General

Principal Software Engineer @ Just Eat Takeaway. iOS Infrastructure Engineer. Based in London.
How to Implement a Decentralised CLI Tool Manager
- CLI manager
- tool
- executable
- manager
- swift
- cli
A design to implement a simple, generic and decentralised manager for CLI tools from the perspective of a Swift dev.
Overview

It's common for iOS teams to rely on various CLI tools such as SwiftLint, Tuist, and Fastlane. These tools are often installed in different ways. The most common way is to use Homebrew, which is known to lack version pinning and, as Pedro puts it: "Homebrew is not able to install and activate multiple versions of the same tool". I also fundamentally dislike the tap system for installing dependencies from third-party repositories. Although I don't have concrete data, I feel that most development teams profoundly dislike Homebrew when used beyond the simple installation of individual tools from the command line, and the brew taps system is cumbersome and bizarre enough to often discourage developers from using it.

Alternatives for managing sets of CLI tools that have gained traction in the past couple of years are Mint and Mise. As Pedro again says in his article about Mise: "The first and most core feature of Mise is the ability to install and activate dev tools. Note that we say 'activate' because, unlike Homebrew, Mise differentiates between installing a tool and making a specific version of it available." While beyond the scope of this article, I recommend a great article about installing Swift executables from source with Mise by Natan Rolnik.

In this article I describe a CLI tool manager very similar to what I've implemented for my team. I'll simply call it "ToolManager". The tool is designed to:

- support installing any external CLI tool distributed in zip archives
- support activating specific versions per project
- be decentralised (requiring no registry)

I believe the decentralisation is an interesting aspect that makes the tool reusable in any development environment. Also, unlike the design of Mise and Mint, ToolManager doesn't build from source and instead relies on pre-built executables.

In the age of GenAI, it's more important than ever to develop critical thinking and learn how to solve problems. For this reason, I won't show the implementation of ToolManager, as it's more important to understand how it's meant to work. The code you'll see in this article supports the overarching design, not the nitty-gritty details of how ToolManager's commands are implemented. If, by the end of the article, you understand how the system should work and are interested in implementing it (perhaps using GenAI), you should be able to convert the design to code fairly easily and, hopefully, without losing the joy of coding. I myself am considering implementing ToolManager as an open source project later, as I believe it might be very helpful to many teams, just as its incarnation was (and continues to be) for the platform team at JET. There doesn't seem to be an existing tool with the design described in this article. A different title could have reasonably placed this article in "The easiest X" "series" (1, 2, 3, 4), if I may say so.

Design

The point here is to learn what implementing a tool manager entails. I'll therefore describe the MVP of ToolManager, leaving out details that would make the design too straightforward to implement. The tool itself is a CLI and it's reasonably implemented in Swift using ArgumentParser, like all modern Swift CLI tools are.
In its simplest form, ToolManager exposes 3 commands:

- install:
  - downloads and installs the tools defined in a spec file (Toolfile.yml) at ~/.toolManager/tools, optionally validating the checksum
  - creates symlinks to the installed versions at $(PWD)/.toolManager/active
- uninstall:
  - clears the entire or partial content of ~/.toolManager/tools
  - clears the content of $(PWD)/.toolManager/active
- version:
  - returns the version of the tool

The install command allows specifying the location of the spec file using the --spec flag, which defaults to Toolfile.yml in the current directory.
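To give a sense of the command surface, here is a minimal ArgumentParser skeleton. It's a sketch only: the type names, the hard-coded version and the empty command bodies are illustrative, not taken from an actual implementation.

```swift
import ArgumentParser

@main
struct ToolManager: ParsableCommand {
    static let configuration = CommandConfiguration(
        abstract: "A decentralised manager for prebuilt CLI tools.",
        version: "1.2.0", // illustrative; would match .toolmanager-version
        subcommands: [Install.self, Uninstall.self, Version.self]
    )
}

struct Install: ParsableCommand {
    @Option(help: "Path to the spec file.")
    var spec: String = "Toolfile.yml"

    func run() throws {
        // 1. Parse the spec file.
        // 2. Download/unzip missing versions into ~/.toolManager/tools.
        // 3. Symlink the requested versions into $(PWD)/.toolManager/active.
    }
}

struct Uninstall: ParsableCommand {
    func run() throws {
        // Remove installed tools and the active symlinks.
    }
}

struct Version: ParsableCommand {
    func run() throws {
        print(ToolManager.configuration.version)
    }
}
```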
The installation of ToolManager should be done in the most raw way, i.e. via a remote script. It'd be quite laughable to rely on Brew, wouldn't it? This practice is commonly used by a variety of tools, for example originally by Tuist (before the introduction of Mise) and... you guessed it... by Brew. We'll see below a basic script to achieve this, which you could host on something like AWS S3 with the desired public permissions. The installation command would be:

```bash
curl -Ls 'https://my-bucket.s3.eu-west-1.amazonaws.com/install_toolmanager.sh' | bash
```

The version of ToolManager must be defined in the .toolmanager-version file in order for the installation script of the repo to work:

```bash
echo "1.2.0" > .toolmanager-version
```

ToolManager manages versions of CLI tools but it's not in the business of managing its own versions. Back in the day, Tuist used to use tuistenv to solve this problem. I simply avoid it and have a single version of ToolManager available at /usr/local/bin/ that the installation script overrides with the version defined for the project. The version command is used by the script to decide if a download is needed. There will be only one version of ToolManager in the system at a given time, and that's absolutely OK.

At this point, it's time to show an example installation script:

```bash
#!/bin/bash
set -euo pipefail

# Fail fast if essential commands are missing.
command -v curl >/dev/null || { echo "curl not found, please install it."; exit 1; }
command -v unzip >/dev/null || { echo "unzip not found, please install it."; exit 1; }

readonly EXEC_NAME="ToolManager"
readonly INSTALL_DIR="/usr/local/bin"
readonly EXEC_PATH="$INSTALL_DIR/$EXEC_NAME"
readonly HOOK_DIR="$HOME/.toolManager"
readonly REQUIRED_VERSION=$(cat .toolmanager-version)

# Exit if the version file is missing or empty.
if [[ -z "$REQUIRED_VERSION" ]]; then
  echo "Error: .toolmanager-version not found or is empty." >&2
  exit 1
fi

# Exit if the tool is already installed and up to date.
if [[ -f "$EXEC_PATH" ]] && [[ "$($EXEC_PATH version)" == "$REQUIRED_VERSION" ]]; then
  echo "$EXEC_NAME version $REQUIRED_VERSION is already installed."
  exit 0
fi

# Determine OS and the corresponding zip filename.
case "$(uname -s)" in
  Darwin) ZIP_FILENAME="$EXEC_NAME-macOS.zip" ;;
  Linux)  ZIP_FILENAME="$EXEC_NAME-Linux.zip" ;;
  *) echo "Unsupported OS: $(uname -s)" >&2; exit 1 ;;
esac

# Download and install in a temporary directory.
TMP_DIR=$(mktemp -d)
trap 'rm -rf "$TMP_DIR"' EXIT # Ensure cleanup on script exit.

echo "Downloading $EXEC_NAME ($REQUIRED_VERSION)..."
DOWNLOAD_URL="https://github.com/MyOrg/$EXEC_NAME/releases/download/$REQUIRED_VERSION/$ZIP_FILENAME"
curl -LSsf --output "$TMP_DIR/$ZIP_FILENAME" "$DOWNLOAD_URL"
unzip -o -qq "$TMP_DIR/$ZIP_FILENAME" -d "$TMP_DIR"

# Use sudo only when the install directory is not writable.
SUDO_CMD=""
if [[ ! -w "$INSTALL_DIR" ]]; then
  SUDO_CMD="sudo"
fi

echo "Installing $EXEC_NAME to $INSTALL_DIR..."
$SUDO_CMD mkdir -p "$INSTALL_DIR"
$SUDO_CMD mv "$TMP_DIR/$EXEC_NAME" "$EXEC_PATH"
$SUDO_CMD chmod +x "$EXEC_PATH"

# Download and source the shell hook to complete installation.
echo "Installing shell hook..."
mkdir -p "$HOOK_DIR"
curl -LSsf --output "$HOOK_DIR/shell_hook.sh" "https://my-bucket.s3.eu-west-1.amazonaws.com/shell_hook.sh"
# shellcheck source=/dev/null
source "$HOOK_DIR/shell_hook.sh"

echo "Installation complete."
```

You might have noticed that:

- the required version of ToolManager (defined in .toolmanager-version) is downloaded from the corresponding GitHub release if missing locally. The ToolManager repo should have a GHA workflow in place to build, archive and upload each version.
- a shell_hook script is downloaded and run to insert the following line in the shell profile: `[[ -s "$HOME/.toolManager/shell_hook.sh" ]] && source "$HOME/.toolManager/shell_hook.sh"`. This allows switching location in the terminal and loading the active tools for the current project.

Showing an example of shell_hook.sh is in order:

```bash
#!/bin/bash
# Overrides 'cd' to update PATH when entering a directory with a local tool setup.

# Add the project-specific bin directory to PATH if it exists.
update_tool_path() {
  local tool_bin_dir="$PWD/.toolManager/active"
  if [[ -d "$tool_bin_dir" ]]; then
    export PATH="$tool_bin_dir:$PATH"
  fi
}

# Redefine 'cd' to trigger the path update after changing directories.
cd() {
  builtin cd "$@" || return
  update_tool_path
}

# --- Installation Logic ---
# The following function only runs when this script is sourced by an installer.
install_hook() {
  local rc_file
  case "${SHELL##*/}" in
    bash) rc_file="$HOME/.bashrc" ;;
    zsh)  rc_file="$HOME/.zshrc" ;;
    *) echo "Unsupported shell for hook installation: $SHELL" >&2
       return 1 ;;
  esac

  # The line to add to the shell's startup file.
  local hook_line="[[ -s \"$HOME/.toolManager/shell_hook.sh\" ]] && source \"$HOME/.toolManager/shell_hook.sh\""

  # Add the hook if it's not already present.
  if ! grep -Fxq "$hook_line" "$rc_file" &>/dev/null; then
    printf "\n%s\n" "$hook_line" >> "$rc_file"
    echo "Shell hook installed in $rc_file. Restart your shell to apply changes."
  fi
}

# This check ensures 'install_hook' only runs when sourced, not when executed.
if [[ "${BASH_SOURCE[0]}" != "$0" ]]; then
  install_hook
fi
```

Now that we have a working installation of ToolManager, let's define our Toolfile.yml in our project folder:

```yaml
---
tools:
  - name: PackageGenerator
    binaryPath: PackageGenerator
    version: 3.3.0
    zipUrl: https://github.com/justeattakeaway/PackageGenerator/releases/download/3.3.0/PackageGenerator-macOS.zip
  - name: SwiftLint
    binaryPath: swiftlint
    version: 0.57.0
    zipUrl: https://github.com/realm/SwiftLint/releases/download/0.57.0/portable_swiftlint.zip
  - name: ToggleGen
    binaryPath: ToggleGen
    version: 1.0.0
    zipUrl: https://github.com/TogglesPlatform/ToggleGen/releases/download/1.0.0/ToggleGen-macOS-universal-binary.zip
  - name: Tuist
    binaryPath: tuist
    version: 4.48.0
    zipUrl: https://github.com/tuist/tuist/releases/download/4.48.0/tuist.zip
  - name: Sourcery
    binaryPath: bin/sourcery
    version: 2.2.5
    zipUrl: https://github.com/krzysztofzablocki/Sourcery/releases/download/2.2.5/sourcery-2.2.5.zip
```
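As a rough idea of how the spec could be modelled on the Swift side, the following sketch decodes the Toolfile assuming the Yams package for YAML decoding. The type names are mine, not part of any published implementation, and the optional checksum field simply reflects the optional checksum validation mentioned earlier.

```swift
import Foundation
import Yams

// Mirrors the structure of Toolfile.yml. Names are illustrative.
struct Toolfile: Decodable {
    let tools: [ToolSpec]
}

struct ToolSpec: Decodable {
    let name: String        // e.g. "SwiftLint"
    let binaryPath: String  // path of the executable inside the zip, e.g. "bin/sourcery"
    let version: String     // version to install and activate, e.g. "0.57.0"
    let zipUrl: URL         // where the prebuilt zip archive is hosted
    let checksum: String?   // optional checksum to validate the download (assumed field)
}

func loadToolfile(at path: String) throws -> Toolfile {
    let yaml = try String(contentsOfFile: path, encoding: .utf8)
    return try YAMLDecoder().decode(Toolfile.self, from: yaml)
}
```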
The install command of ToolManager loads the Toolfile at the root of the repo and, for each defined dependency, performs the following:

- checks if the version of the dependency already exists on the machine
- if it doesn't exist, downloads it, unzips it, and places the binary at ~/.toolManager/tools/ (e.g. ~/.toolManager/tools/PackageGenerator/3.3.0/PackageGenerator)
- creates a symlink to the binary in the project directory from .toolManager/active (e.g. .toolManager/active/PackageGenerator)

After running `ToolManager install` (or `ToolManager install --spec=Toolfile.yml`), ToolManager should produce the following structure:

```
~ tree ~/.toolManager/tools -L 2
├── PackageGenerator
│   └── 3.3.0
├── Sourcery
│   └── 2.2.5
├── SwiftLint
│   └── 0.57.0
├── ToggleGen
│   └── 1.0.0
└── Tuist
    └── 4.48.0
```

and from the project folder:

```
ls -la .toolManager/active
<redacted> PackageGenerator -> /Users/alberto/.toolManager/tools/PackageGenerator/3.3.0/PackageGenerator
<redacted> Sourcery -> /Users/alberto/.toolManager/tools/Sourcery/2.2.5/Sourcery
<redacted> SwiftLint -> /Users/alberto/.toolManager/tools/SwiftLint/0.57.0/SwiftLint
<redacted> ToggleGen -> /Users/alberto/.toolManager/tools/ToggleGen/1.0.0/ToggleGen
<redacted> Tuist -> /Users/alberto/.toolManager/tools/Tuist/4.48.0/Tuist
```

Bumping the versions of some tools in the Toolfile, for example SwiftLint and Tuist, and re-running the install command should result in the following:

```
~ tree ~/.toolManager/tools -L 2
├── PackageGenerator
│   └── 3.3.0
├── Sourcery
│   └── 2.2.5
├── SwiftLint
│   ├── 0.57.0
│   └── 0.58.2
├── ToggleGen
│   └── 1.0.0
└── Tuist
    ├── 4.48.0
    └── 4.54.3
```

```
ls -la .toolManager/active
<redacted> PackageGenerator -> /Users/alberto/.toolManager/tools/PackageGenerator/3.3.0/PackageGenerator
<redacted> Sourcery -> /Users/alberto/.toolManager/tools/Sourcery/2.2.5/Sourcery
<redacted> SwiftLint -> /Users/alberto/.toolManager/tools/SwiftLint/0.58.2/SwiftLint
<redacted> ToggleGen -> /Users/alberto/.toolManager/tools/ToggleGen/1.0.0/ToggleGen
<redacted> Tuist -> /Users/alberto/.toolManager/tools/Tuist/4.54.3/Tuist
```
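Purely as an illustration of the activation step (not ToolManager's actual code), and reusing the ToolSpec type from the earlier sketch, creating the symlinks could look like this:

```swift
import Foundation

// Illustrative sketch of the activation step: link the requested version of each
// tool from the shared cache into the project's .toolManager/active directory.
func activate(tool: ToolSpec, projectDir: URL, toolsCache: URL) throws {
    let fileManager = FileManager.default

    let binaryName = (tool.binaryPath as NSString).lastPathComponent
    let installedBinary = toolsCache
        .appendingPathComponent(tool.name)
        .appendingPathComponent(tool.version)
        .appendingPathComponent(binaryName)

    let activeDir = projectDir.appendingPathComponent(".toolManager/active")
    try fileManager.createDirectory(at: activeDir, withIntermediateDirectories: true)

    let link = activeDir.appendingPathComponent(tool.name)
    // Replace any symlink left over from a previously active version.
    try? fileManager.removeItem(at: link)
    try fileManager.createSymbolicLink(at: link, withDestinationURL: installedBinary)
}
```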
CI Setup

On CI, the setup is quite simple. It involves 2 steps:

- install ToolManager
- install the tools

The commands can be wrapped in GitHub composite actions:

```yaml
name: Install ToolManager
runs:
  using: composite
  steps:
    - name: Install ToolManager
      shell: bash
      run: curl -Ls 'https://my-bucket.s3.eu-west-1.amazonaws.com/install_toolmanager.sh' | bash
```

```yaml
name: Install tools
inputs:
  spec:
    description: The name of the ToolManager spec file
    required: false
    default: Toolfile.yml
runs:
  using: composite
  steps:
    - name: Install tools
      shell: bash
      run: |
        ToolManager install --spec=${{ inputs.spec }}
        echo "$PWD/.toolManager/active" >> $GITHUB_PATH
```

simply used in workflows:

```yaml
- name: Install ToolManager
  uses: ./.github/actions/install-toolmanager
- name: Install tools
  uses: ./.github/actions/install-tools
  with:
    spec: Toolfile.yml
```

CLI tools conformance

ToolManager can install tools that are made available in zip files, without the need to implement any particular spec. Depending on the CLI tool, the executable can be at the root of the zip archive or in a subfolder. Sourcery, for example, places the executable in the bin folder.

```yaml
- name: Sourcery
  binaryPath: bin/sourcery
  version: 2.2.5
  zipUrl: https://github.com/krzysztofzablocki/Sourcery/releases/download/2.2.5/sourcery-2.2.5.zip
```

GitHub releases are a great way to host releases as zip files, and that's all we need. Ideally, one should decorate the repositories with appropriate release workflows. Following is a simple example that builds a macOS binary. It could be extended to also create a Linux binary.

```yaml
name: Publish Release

on:
  push:
    tags:
      - '*'

env:
  CLI_NAME: my-awesome-cli-tool

permissions:
  contents: write

jobs:
  build-and-archive:
    name: Build and Archive macOS Binary
    runs-on: macos-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Setup Xcode
        uses: maxim-lobanov/setup-xcode@v1
        with:
          xcode-version: '16.4'
      - name: Build universal binary
        run: swift build -c release --arch arm64 --arch x86_64
      - name: Archive the binary
        run: |
          cd .build/apple/Products/Release/
          zip -r "${{ env.CLI_NAME }}-macOS.zip" "${{ env.CLI_NAME }}"
      - name: Upload artifact for release
        uses: actions/upload-artifact@v4
        with:
          name: cli-artifact
          path: .build/apple/Products/Release/${{ env.CLI_NAME }}-macOS.zip

  create-release:
    name: Create GitHub Release
    needs: [build-and-archive]
    runs-on: ubuntu-latest
    steps:
      - name: Download CLI artifact
        uses: actions/download-artifact@v4
        with:
          name: cli-artifact
      - name: Create Release and Upload Asset
        uses: softprops/action-gh-release@v2
        with:
          files: "${{ env.CLI_NAME }}-macOS.zip"
```

A note on version pinning

Dependency management systems tend to use a lock file (like Package.resolved in Swift Package Manager, Podfile.lock in the old days of CocoaPods, yarn.lock/package-lock.json in JavaScript, etc.). The benefits of using a lock file are mainly two:

- Reproducibility: it locks the exact versions (including transitive dependencies) so that every team member, CI server, or production environment installs the same versions.
- Faster installs: dependency managers can skip version resolution if a lock file is present, using it directly to fetch the exact versions, improving speed.

We can remove the need for lock files if we pin the versions in the spec (the file defining the tools). If version range operators such as CocoaPods' optimistic operator ~> and SPM's .upToNextMajor didn't exist, lock files would lose much of their utility. While useful, lock files are generally annoying and can create that odd feeling of seeing unexpected updates in pull requests made by others. ToolManager doesn't use a lock file; instead, it requires teams to pin their tools' versions, which I strongly believe is a good practice. This approach comes at the cost of teams having to keep an eye out for patch releases and not leaving updates to the machine, which risks pulling in dependencies that don't respect Semantic Versioning (SemVer).

Support for different architectures

This design allows supporting different architectures. Some CI workflows might only need a Linux runner to reduce the burden on precious macOS instances. Both macOS and Linux can be supported with individual Toolfiles that can be specified when running the install command.

```bash
# on macOS
ToolManager install --spec=Toolfile_macOS
# on Linux
ToolManager install --spec=Toolfile_Linux
```

Conclusion

The design described in this article powers the solution implemented at JET and has served our teams successfully since October 2023. JET has always preferred to implement in-house solutions where possible and sensible, and I can say that moving away from Homebrew was a blessing. With this design, the work usually done by a package manager and a central spec repository is shifted to individual components that are only required to publish releases in zip archives, ideally via a release workflow. By decentralising and requiring version pinning, we made ToolManager a simple yet powerful system for managing the installation of CLI tools.

How to setup a Swift Package Registry in Artifactory
- swift
- registry
- artifactory
- package
A quick guide to setting up a Swift Package Registry with Artifactory to speed up builds and streamline dependency management.
Introduction

It's very difficult to have GenAI not hallucinate when it comes to Swift Package Registry. No surprise there: the feature is definitely niche, has not been vastly adopted, and there's a lack of examples online. As Dave put it, Swift Package Registries had an even rockier start compared to SPM. I've recently implemented a Swift Package Registry on Artifactory for my team and I thought of summarising my experience here while it's still fresh in my head. While some details are left out, the happy path should be covered. I hope this article helps you all indirectly by providing more material to the LLM overlords.

Problem

The main problem that led us to look into Swift Package Registry is that SPM deep-clones the entire Git repository of each dependency, which became time-consuming. Our CI jobs took a few minutes just to pull all the Swift packages. For dependencies with very large repositories, such as SendbirdUIKit (which is more than 2GB), one could rely on pre-compiled XCFrameworks as a workaround; Airbnb provides such a workaround via the SPM-specific repo for Lottie. A Swift Registry allows serving dependencies as zip artifacts containing only the required revision, avoiding the deep clone of the Git repositories.

What is a Swift Package Registry?

A Swift Package Registry is a server that stores and vends Swift packages by implementing SE-0292 and the corresponding specification. Instead of relying on Git repositories to source our dependencies, we can use a registry to download them as versioned archives (zip files). See swift-package-manager/Documentation/PackageRegistry/PackageRegistryUsage.md in the swiftlang/swift-package-manager repository on GitHub.

The primary advantages of using a Swift Package Registry are:

- Reduced CI/CD pipeline times: lightweight zip archives are fetched from the registry rather than cloning entire repositories from GitHub.
- Improved developer machine performance: the same time savings seen on CI apply to developers' machines during dependency resolution.
- Availability: by hosting a registry, teams are no longer dependent on the availability of external source control systems like GitHub, but rather on internal ones (for example, self-hosted Artifactory).
- Security: injecting vulnerabilities into popular open-source projects is known as a supply chain attack and has become increasingly common in recent years. A registry allows adopting a process to trust the sources published on it.

Platforms

Apple has accepted the Swift Registry specification and implemented support for interacting with registries within SPM, but has left the implementation of actual registries to third-party platforms. Apple is not in the business of providing a Swift Registry. The main platform to have adopted Swift Registries is Artifactory ("JFrog now offers the first and only Swift binary package repository, enabling developers to use JFrog Artifactory for resolving Swift dependencies instead of enterprise source control (Git) systems."), although AWS CodeArtifact, Cloudsmith and Tuist provide support too:
- "New – Add Your Swift Packages to AWS CodeArtifact" (Amazon Web Services, Sébastien Stormacq)
- "Private, secure, hosted Swift registry" (Cloudsmith)
- "Announcing Tuist Registry" (Tuist, Marek Fořt)

The benefits are usually appealing to teams with large apps, hence it's reasonable to believe that only big companies have looked into adopting a registry successfully.

Artifactory Setup

Let's assume a JFrog Artifactory instance to host our Swift Package Registry exists at https://packages.acme.com. Artifactory supports local, remote, and virtual repositories, but a realistic setup consists of only local and virtual repositories (source: Artifactory).

- Local repositories are meant to be used for publishing dependencies from CI pipelines.
- Virtual repositories are instead meant to be used for resolving (pulling) dependencies on both CI and the developers' machines.
- Remote repositories are not really relevant in a typical Swift Registry setup.

Following the documentation at https://jfrog.com/help/r/jfrog-artifactory-documentation/set-up-a-swift-registry, let's create 2 repositories with the following names:

- local repository: swift-local
- virtual repository: swift-virtual

Local Setup

To pull dependencies from the Swift Package Registry, we need to configure the local environment.

1. Set the Registry URL

First, we need to inform SPM about the existence of the registry. We can do this on a per-project basis or globally for the user account. From a package's root directory, run the following command. This will create a .swiftpm/configuration/registries.json file within your project folder.

```bash
swift package-registry set "https://packages.acme.com/artifactory/api/swift/swift-virtual"
```

The resulting registries.json file will look like this:

```json
{
  "authentication": {},
  "registries": {
    "[default]": {
      "supportsAvailability": false,
      "url": "https://packages.acme.com/artifactory/api/swift/swift-virtual"
    }
  },
  "version": 1
}
```

To set the registry for all your projects, use the --global flag.

```bash
swift package-registry set --global "https://packages.acme.com/artifactory/api/swift/swift-virtual"
```

This will create the configuration file at ~/.swiftpm/configuration/registries.json. Xcode projects don't support project-level registries nor (in my experience) scopes other than the default one (i.e. avoid using the --scope flag).

2. Authentication

To pull packages, authenticating with Artifactory is usually required. It's plausible, though, that your company allows all artifacts on Artifactory to be read without authentication as long as one is connected to the company VPN. In cases where authentication is required, SPM uses a .netrc file in the home directory to find credentials for remote servers. This file is a standard way to handle login information for various network protocols. Using a token generated from the Artifactory dashboard, the line to add to the .netrc file would be:

```
machine packages.acme.com login <your_artifactory_username> password <your_artifactory_token>
```

Alternatively, it's possible to log in using the swift package-registry login command, which stores your token securely in the system's keychain.
```bash
swift package-registry login "https://packages.acme.com/artifactory/api/swift/swift-virtual" \
  --token <token>

# or

swift package-registry login "https://packages.acme.com/artifactory/api/swift/swift-virtual" \
  --username <username> \
  --password <token_treated_as_password>
```

CI/CD Setup

On CI, the setup is slightly different as the goals are:

- to resolve dependencies in CI/CD jobs
- to publish new package versions in CD jobs, for both internal and external dependencies

The steps described for the local setup are valid for the resolution on CI too. The interesting part here is how publishing is done. I will assume the usage of GitHub Actions.

1. Retrieving the Artifactory Token

The JFrog CLI can be used via the setup-jfrog-cli action to authenticate using the most appropriate method. You might want to wrap the action in a custom composite one exporting the token as the output of a step:

```bash
TOKEN=$(jf config export)
echo "::add-mask::$TOKEN"
echo "artifactory-token=$TOKEN" >> "$GITHUB_OUTPUT"
```

2. Logging into the Registry

The CI job must log in to the local repository (swift-local) to gain push permissions. The token retrieved in the previous step is used for this purpose.

```bash
swift package-registry login \
  "https://packages.acme.com/artifactory/api/swift/swift-local" \
  --token ${{ steps.get-token.outputs.artifactory-token }}
```

3. Publishing Packages

Swift Registry requires archives created with the swift package archive-source command from the dependency folder. E.g.

```bash
swift package archive-source -o "Alamofire-5.10.1.zip"
```

We could avoid creating the archive and instead download it directly from GitHub releases:

```bash
curl -L -o Alamofire-5.10.1.zip \
  https://github.com/Alamofire/Alamofire/archive/refs/tags/5.10.1.zip
```

The archive can then be uploaded using the JFrog CLI, which needs customisation via the setup-jfrog-cli action. If going down this route, the upload command would be:

```bash
jf rt upload Alamofire-5.10.1.zip \
  https://packages.acme.com/artifactory/api/swift/swift-local/acme/Alamofire/Alamofire-5.10.1.zip
```

There is a specific structure to respect:

```
<REPOSITORY>/<SCOPE>/<NAME>/<NAME>-<VERSION>.zip
```

which is the last part of the above URL: swift-local/acme/Alamofire/Alamofire-5.10.1.zip.

Too bad that using the steps above causes a downstream problem with SPM not being able to resolve the dependencies in the registry. I tried extensively and couldn't find the reason why SPM wasn't happy with how the packages were published. I might have missed something, but eventually I had to switch to the publish command. Using the swift package-registry publish command instead doesn't present this issue, hence it's the solution adopted in this workflow.

```bash
swift package-registry publish acme.Alamofire 5.10.1 \
  --url https://packages.acme.com/artifactory/api/swift/swift-local \
  --scratch-directory $(mktemp -d)
```

To verify that the upload and indexing succeeded, check that the uploaded *.zip artifact is available and that the .swift folder exists (an indication that indexing has occurred). If the specific structure is not respected, the .swift folder won't be generated.

Consuming Packages from the Registry

Packages

The easiest and only documented way to consume a package from a registry is via a Package. In the Package.swift file, use the .package(id:from:) syntax to declare a registry-based dependency. The id is a combination of the scope and the package name.
```swift
...
    dependencies: [
        .package(id: "acme.Alamofire", from: "5.10.1"),
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [
                .product(name: "Alamofire", package: "acme.Alamofire"),
            ]
        ),
        ...
    ]
)
```

Run swift package resolve or simply build the Package in Xcode to pull the dependencies.
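For completeness, here is what a minimal, self-contained Package.swift consuming the registry-hosted dependency could look like; the package name, target name and platforms are illustrative.

```swift
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v15), .macOS(.v13)],
    dependencies: [
        // Registry identity: <scope>.<name> instead of a Git URL.
        .package(id: "acme.Alamofire", from: "5.10.1"),
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [
                .product(name: "Alamofire", package: "acme.Alamofire"),
            ]
        ),
    ]
)
```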
You might bump into transitive dependencies (i.e. dependencies listed in the Package.swift files of the packages published on the registry) pointing to GitHub. In this case, it'd be great to instruct SPM to use the corresponding versions on the registry. The --replace-scm-with-registry flag is designed to work for the entire dependency graph, including transitive dependencies.

The cornerstone of associating a registry-hosted package with its GitHub origin is the package-metadata.json file. This file allows providing essential metadata about the packages at the time of publishing (the --metadata-path flag of the publish command defaults to package-metadata.json). Crucially, it includes a field to specify the source control repository URLs. When swift package resolve --replace-scm-with-registry is executed, SPM queries the configured registry. The registry then uses the information from package-metadata.json to map the package identity to its corresponding GitHub URL, enabling a smooth and transparent resolution process. The metadata file must conform to the JSON schema defined in SE-0391. It is recommended to include all URL variations (e.g. SSH, HTTPS) for the same repository. E.g.

```json
{
  "repositoryURLs": [
    "https://github.com/Alamofire/Alamofire",
    "https://github.com/Alamofire/Alamofire.git",
    "git@github.com:Alamofire/Alamofire.git"
  ]
}
```

Printing the dependencies should confirm the source of the dependencies:

```bash
swift package show-dependencies --replace-scm-with-registry
```

When loading a package with Xcode, the flag can be enabled via an environment variable in the scheme:

```
IDEPackageDependencySCMToRegistryTransformation=useRegistryIdentityAndSources
```

Too bad that, for packages, the schemes won't load until SPM completes the resolution, hence running the following from the terminal addresses the issue:

```bash
defaults write com.apple.dt.Xcode IDEPackageDependencySCMToRegistryTransformation useRegistryIdentityAndSources
```

which can be unset with:

```bash
defaults delete com.apple.dt.Xcode IDEPackageDependencySCMToRegistryTransformation
```

Xcode

It's likely that you'll want to use the registry from Xcode projects for direct dependencies. If using the Tuist registry, it seems you would be able to leverage a Package Collection to add dependencies from the registry via the Xcode UI. Note that, until Xcode 26 Beta 1, it's not possible to add registry dependencies directly in the Xcode UI, but if you use Tuist to generate your project (as you should), you can use Package.registry (introduced with https://github.com/tuist/tuist/pull/7225). E.g.

```swift
let project = Project(
    ...
    packages: [
        .registry(
            identifier: "acme.Alamofire",
            requirement: .exact(Version(stringLiteral: "5.10.1"))
        )
    ],
    ...
)
```

If not using Tuist, you'd have to rely on setting IDEPackageDependencySCMToRegistryTransformation either as an environment variable in the scheme or globally via the terminal. You can also use xcodebuild to resolve dependencies using the correct flag:

```bash
xcodebuild \
  -resolvePackageDependencies \
  -packageDependencySCMToRegistryTransformation useRegistryIdentityAndSources
```

Conclusions

We've found that using an in-house Swift registry drastically reduces dependency resolution time and size on disk by downloading only the required revision instead of the entire, potentially large, Git repository. This improvement benefits both CI pipelines and developers' local environments. Additionally, registries help mitigate the risk of supply chain attacks.

As of this writing, Swift registries are not widely adopted, which is reflected in the limited number of platforms that support them. It also shows in the various bugs I bumped into when using particular configurations (source: https://forums.swift.org/t/package-registry-support-in-xcode/73626/19). It's unclear whether adoption will grow, and uncertain whether Apple will ever address the issues reported by the community, but once a functioning setup is in place, registries offer an efficient and secure alternative to using XCFrameworks in production builds and reduce both memory and time footprints.

Scalable Continuous Integration for iOS
- CI
- mobile
- iOS
- AWS
- macOS
How Just Eat Takeaway.com leverage AWS, Packer, Terraform and GitHub Actions to manage a CI stack of macOS runners.
Originally published on the Just Eat Takeaway Engineering Blog.

Problem

At Just Eat Takeaway.com (JET), our journey through continuous integration (CI) reflects a landscape of innovation and adaptation. Historically, JET's multiple iOS teams operated independently, each employing their distinct CI solutions. The original Just Eat iOS and Android teams had pioneered an in-house CI solution anchored in Jenkins. This setup, detailed in our 2021 article, served as the backbone of our CI practices up until 2020. It was during this period that the iOS team initiated a pivotal migration: moving from in-house Mac Pros and Mac Minis to AWS EC2 macOS instances.

Fast forward to 2023, a significant transition occurred within our Continuous Delivery Engineering (CDE) Platform Engineering team. The decision to adopt GitHub Actions company-wide marked the end of our reliance on Jenkins, while other teams are in the process of migrating away from solutions such as CircleCI and GitLab CI. This transition represented a fundamental shift in our CI philosophy. By moving away from Jenkins, we eliminated the need to maintain an instance for the Jenkins server and the complexities of managing how agents connected to it. Our focus then shifted to transforming our Jenkins pipelines into GitHub Actions workflows. This transformation extended beyond mere tool adoption. Our primary goal was to ensure that our macOS instances were not only scalable but also configured in code. We therefore enhanced our global CI practices and set standards across the entire company.

Desired state of CI

As we embarked on our journey to refine and elevate our CI process, we envisioned a state-of-the-art CI system. Our goals were ambitious yet clear, focusing on scalability, automation, and efficiency. At the time of implementing the system, no other player in the industry seemed to have implemented the complete solution we envisioned. Below is a summary of our desired CI state:

- Instance setup in code: One primary objective was to enable the definition of the setup of the instances entirely in code. This includes specifying macOS version, Xcode version, Ruby version, and other crucial configurations. For this purpose, the HashiCorp tool Packer emerged once again as an ideal solution, offering the flexibility and precision we required.
- IaC (Infrastructure as Code) for macOS instances: To define the infrastructure of our fleet of macOS instances, we leaned towards Terraform, another HashiCorp tool. Terraform provided us with the capability to not only deploy but also to scale and migrate our infrastructure seamlessly, crucially maintaining its state.
- Auto and Manual Scaling: We wanted the ability to dynamically create CI runners based on demand, ensuring that resources were optimally utilized and available when needed. To optimize resource utilization, especially during off-peak hours, we desired an autoscaling feature. Scaling down our CI runners on weekends, when developer activity is minimal, was critical to be cost-effective.
- Automated Connection to GitHub Actions: We aimed for the instances to automatically connect to GitHub Actions as runners upon deployment. This automation was crucial in eliminating manual interventions via SSH or VNC.
- Multi-Team Use: Our vision included CI runners that could be easily used by multiple teams across different time zones. This would not only maximize the utility of our infrastructure but also encourage reuse and standardization.
- Centralized Management via GitHub Actions: To further streamline our CI processes, we intended to run all tasks through GitHub Actions workflows. This approach would allow the teams to self-serve and alleviate the need for developers to use Docker or maintain local environments.

Getting to the desired state was a journey that presented multiple challenges and constant adjustments to make sure we could migrate smoothly to a new system.

Instance setup in code

We implemented the desired configuration with Packer, leveraging a number of Shell Provisioners and variables to configure the instance. Here are some of the configuration steps:

- Set user password (to allow remote desktop access)
- Resize the partition to use all the space available on the EBS volume
- Start the Apple Remote Desktop agent and enable remote desktop access
- Update Brew & install Brew packages
- Install CloudWatch agent
- Install rbenv/Ruby/bundler
- Install Xcode versions
- Install GitHub Actions actions-runner
- Copy scripts to connect to GitHub Actions as a runner
- Copy daemon to start the GitHub Actions self-hosted runner as a service
- Set macos-init modules to perform provisioning on the first launch

While the steps above are naturally configuration steps to perform when creating the AMI, the macos-init modules include steps to perform once the instance becomes available. The create_ami workflow accepts inputs that are eventually passed to Packer to generate the AMI.

```bash
packer build \
  --var ami_name_prefix=${{ env.AMI_NAME_PREFIX }} \
  --var region=${{ env.REGION }} \
  --var subnet_id=${{ env.SUBNET_ID }} \
  --var vpc_id=${{ env.VPC_ID }} \
  --var root_volume_size_gb=${{ env.ROOT_VOLUME_SIZE_GB }} \
  --var macos_version=${{ inputs.macos-version }} \
  --var ruby_version=${{ inputs.ruby-version }} \
  --var xcode_versions='${{ steps.parse-xcode-versions.outputs.list }}' \
  --var gha_version=${{ inputs.gha-version }} \
  bare-metal-runner.pkr.hcl
```

Different teams often use different versions of software, like Xcode. To accommodate this, we permit multiple versions to be installed on the same instance. The choice of which version to use is then determined within the GitHub Actions workflows.

The seamless generation of AMIs has proven to be a significant enabler. For example, when Xcode 15.1 was released, we executed this workflow the same evening. In just over two hours, we had an AMI ready to deploy all the runners (it usually takes 70–100 minutes for a macOS AMI with a 400GB EBS volume to become ready after creation). This efficiency enabled our teams to use the new Xcode version just a few hours after its release.

IaC (Infrastructure as Code) for macOS instances

Initially, we used distinct Terraform modules for each instance to facilitate the deployment and decommissioning of each one. Given the high cost of EC2 Mac instances, we managed this process with caution, carefully balancing host usage while also being mindful of the 24-hour minimum allocation time.
We ultimately ended up using Terraform to define a single infrastructure (i.e. a single Terraform module) defining resources such as:

- aws_key_pair, aws_instance, aws_ami
- aws_security_group, aws_security_group_rule
- aws_secretsmanager_secret
- aws_vpc, aws_subnet
- aws_cloudwatch_metric_alarm
- aws_sns_topic, aws_sns_topic_subscription
- aws_iam_role, aws_iam_policy, aws_iam_role_policy_attachment, aws_iam_instance_profile

A crucial part was to use count in aws_instance, setting its value from a variable passed in by the deploy_infra workflow. Terraform performs the necessary scaling upon changing the value. We have implemented a workflow to perform Terraform apply and destroy commands for the infrastructure. Only the AMI and the number of instances are required as inputs.

```bash
terraform ${{ inputs.command }} \
  --var ami_name=${{ inputs.ami-name }} \
  --var fleet_size=${{ inputs.fleet-size }} \
  --auto-approve
```

Using the name of the AMI instead of the ID allows us to use the most recent one that was generated, useful in case of name clashes.

```hcl
variable "ami_name" {
  type = string
}

variable "fleet_size" {
  type = number
}

data "aws_ami" "bare_metal_gha_runner" {
  most_recent = true
  filter {
    name   = "name"
    values = ["${var.ami_name}"]
  }
  ...
}

resource "aws_instance" "bare_metal" {
  count         = var.fleet_size
  ami           = data.aws_ami.bare_metal_gha_runner.id
  instance_type = "mac2.metal"
  tenancy       = "host"
  key_name      = aws_key_pair.bare_metal.key_name
  ...
}
```

Instead of maintaining multiple CI instances with varying software configurations, we concluded that it's simpler and more efficient to have a single, standardised setup. While teams still have the option to create and deploy their unique setups, a smaller, unified system allows for easier support by a single global configuration.

Auto and Manual Scaling

The deploy_infra workflow allows us to scale on demand but it doesn't release the underlying dedicated hosts, which are the resources that are ultimately billed. The autoscaling solution provided by AWS is great for VMs but gets considerably more complex when actioned on dedicated hosts. Auto Scaling groups on macOS instances would require a Custom Managed License, a Host Resource Group and, of course, a Launch Template. Using only AWS services appears to require a lot of work to pull things together, and the result wouldn't allow for automatic release of the dedicated hosts. Airbnb mentions in their Flexible Continuous Integration for iOS article that an internal scaling service was implemented: "An internal scaling service manages the desired capacity of each environment's Auto Scaling group."

Some articles explain how to set up Auto Scaling groups for Mac instances (see 1 and 2) but, after careful consideration, we agreed that implementing a simple scaling service via GitHub Actions (GHA) was the easiest and most maintainable solution. We implemented 2 GHA workflows to fully automate the weekend autoscaling:

- an upscaling workflow to n instances, triggered at a specific time at the beginning of the working week
- a downscaling workflow to 1 instance, triggered at a specific time at the beginning of the weekend

We retain the capability for manual scaling, which is essential for situations where we need to scale down, such as on bank holidays, or scale up, like on release cut days, when activity typically exceeds the usual levels. Additionally, we have implemented a workflow that runs multiple times a day and tries to release all available hosts without an instance attached. This lifts us from the burden of having to remember to release the hosts.
Dedicated hosts take up to 110 minutes to move from the Pending to the Available state due to the scrubbing workflow performed by AWS. Manual scaling can be executed between the times the autoscaling workflows are triggered, and the workflows must be resilient to unexpected statuses of the infrastructure, which thankfully Terraform takes care of. Both downscaling and upscaling are covered in the following flowchart:

The autoscaling values are defined as configuration variables in the repo settings:

It usually takes ~8 minutes for an EC2 mac2.metal instance to become reachable after creation, meaning that we can redeploy the entire infrastructure very quickly.

Automated Connection to GitHub Actions

We provide some user data when deploying the instances.

```hcl
resource "aws_instance" "bare_metal" {
  ami   = data.aws_ami.bare_metal_gha_runner.id
  count = var.fleet_size
  ...
  user_data = <<EOF
{
  "github_enterprise": "<GHE_ENTERPRISE_NAME>",
  "github_pat_secret_manager_arn": ${data.aws_secretsmanager_secret_version.ghe_pat.arn},
  "github_url": "<GHE_ENTERPRISE_URL>",
  "runner_group": "CI-MobileTeams",
  "runner_name": "bare-metal-runner-${count.index + 1}"
}
EOF
}
```

The user data is stored in a specific folder by macos-init and we implement a module to copy the content to ~/actions-runner-config.json.

```toml
### Group 10 ###
[[Module]]
Name = "Create actions-runner-config.json from userdata"
PriorityGroup = 10
RunPerInstance = true
FatalOnError = false
[Module.Command]
Cmd = ["/bin/zsh", "-c", 'instanceId="$(curl http://169.254.169.254/latest/meta-data/instance-id)"; if [[ ! -z $instanceId ]]; then cp /usr/local/aws/ec2-macos-init/instances/$instanceId/userdata ~/actions-runner-config.json; fi']
RunAsUser = "ec2-user"
```

This is in turn used by the configure_runner.sh script to configure the GitHub Actions runner.

```bash
#!/bin/bash

GITHUB_ENTERPRISE=$(cat $HOME/actions-runner-config.json | jq -r .github_enterprise)
GITHUB_PAT_SECRET_MANAGER_ARN=$(cat $HOME/actions-runner-config.json | jq -r .github_pat_secret_manager_arn)
GITHUB_PAT=$(aws secretsmanager get-secret-value --secret-id $GITHUB_PAT_SECRET_MANAGER_ARN | jq -r .SecretString)
GITHUB_URL=$(cat $HOME/actions-runner-config.json | jq -r .github_url)
RUNNER_GROUP=$(cat $HOME/actions-runner-config.json | jq -r .runner_group)
RUNNER_NAME=$(cat $HOME/actions-runner-config.json | jq -r .runner_name)

RUNNER_JOIN_TOKEN=`curl -L \
  -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_PAT" \
  $GITHUB_URL/api/v3/enterprises/$GITHUB_ENTERPRISE/actions/runners/registration-token | jq -r '.token'`

MACOS_VERSION=`sw_vers -productVersion`
XCODE_VERSIONS=`find /Applications -type d -name "Xcode-*" -maxdepth 1 \
  -exec basename {} \; \
  | tr '\n' ',' \
  | sed 's/,$/\n/' \
  | sed 's/.app//g'`

$HOME/actions-runner/config.sh \
  --unattended \
  --url $GITHUB_URL/enterprises/$GITHUB_ENTERPRISE \
  --token $RUNNER_JOIN_TOKEN \
  --runnergroup $RUNNER_GROUP \
  --labels ec2,bare-metal,$RUNNER_NAME,macOS-$MACOS_VERSION,$XCODE_VERSIONS \
  --name $RUNNER_NAME \
  --replace
```

The above script is run by a macos-init module.

```toml
### Group 11 ###
[[Module]]
Name = "Configure the GHA runner"
PriorityGroup = 11
RunPerInstance = true
FatalOnError = false
[Module.Command]
Cmd = ["/bin/zsh", "-c", "/Users/ec2-user/configure_runner.sh"]
RunAsUser = "ec2-user"
```

The GitHub documentation states that it's possible to create a customized service starting from a provided template.
It took some research and various attempts to find the right configuration that allows the connection without having to log in via the UI (over VNC), which would have been a blocker for a complete automation of the deployment. We believe that the one person who managed to get this right is Sébastien Stormacq, who provided the correct solution. The connection to GHA can be completed with 2 more modules that install the runner as a service and load the custom daemon.

```toml
### Group 12 ###
[[Module]]
Name = "Run the self-hosted runner application as a service"
PriorityGroup = 12
RunPerInstance = true
FatalOnError = false
[Module.Command]
Cmd = ["/bin/zsh", "-c", "cd /Users/ec2-user/actions-runner && ./svc.sh install"]
RunAsUser = "ec2-user"

### Group 13 ###
[[Module]]
Name = "Launch actions runner daemon"
PriorityGroup = 13
RunPerInstance = true
FatalOnError = false
[Module.Command]
Cmd = ["sudo", "/bin/launchctl", "load", "/Library/LaunchDaemons/com.justeattakeaway.actions-runner-service.plist"]
RunAsUser = "ec2-user"
```

Using a daemon instead of an agent (see Creating Launch Daemons and Agents) means we don't have to set up any auto-login, which on macOS is a bit of a tricky procedure and is best avoided also for security reasons. The following is the content of the daemon for completeness.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.justeattakeaway.actions-runner-service</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/ec2-user/actions-runner/runsvc.sh</string>
    </array>
    <key>UserName</key>
    <string>ec2-user</string>
    <key>GroupName</key>
    <string>staff</string>
    <key>WorkingDirectory</key>
    <string>/Users/ec2-user/actions-runner</string>
    <key>RunAtLoad</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/Users/ec2-user/Library/Logs/com.justeattakeaway.actions-runner-service/stdout.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/ec2-user/Library/Logs/com.justeattakeaway.actions-runner-service/stderr.log</string>
    <key>EnvironmentVariables</key>
    <dict>
        <key>ACTIONS_RUNNER_SVC</key>
        <string>1</string>
    </dict>
    <key>ProcessType</key>
    <string>Interactive</string>
    <key>SessionCreate</key>
    <true/>
</dict>
</plist>
```

Not long after the deployment, all the steps above are executed and we can appreciate the runners appearing as connected.

Multi-Team Use

We start the downscaling at 11:59 PM on Fridays and start the upscaling at 6:00 AM on Mondays. These times have been chosen in a way that guarantees a level of service to teams in the UK, the Netherlands (GMT+1) and Canada (Winnipeg is on GMT-6), accounting for BST (British Summer Time) and DST (Daylight Saving Time) too. Times are defined in UTC in the GHA workflow triggers and the local time of the runner is not taken into account.

Since the instances are used to build multiple projects and tools owned by different teams, one problem we faced was that instances could get compromised if workflows included unsafe steps (e.g. modifications to global configurations). GitHub Actions has a documentation page about hardening self-hosted runners, specifically stating: "Self-hosted runners for GitHub do not have guarantees around running in ephemeral clean virtual machines, and can be persistently compromised by untrusted code in a workflow."

We try to combat such potential problems by educating people on how to craft workflows and rely on the quick redeployment of the stack should the instances break.
We also run scripts before and after each job to ensure that instances can be reused as much as possible. This includes actions like deleting the simulators' content, derived data, caches and archives.

Centralized Management via GitHub Actions

The macOS runners stack is defined in a dedicated macOS-runners repository. We implemented GHA workflows to cover the use cases that allow teams to self-serve:

- create macOS AMI
- deploy CI
- downscale for the weekend*
- upscale for the working week*
- release unused hosts*

* run without inputs and on a scheduled trigger

The runners running the jobs in this repo are small t2.micro Linux instances and come with the AWS CLI installed. An IAM instance role with the correct policies is used to make sure that the aws ec2 commands allocate-hosts, describe-hosts and release-hosts can execute, and we used jq to parse the API responses.

A note on VM runners

In this article, we discussed how we've used bare metal instances as runners. We have spent a considerable amount of time investigating how we could leverage the Virtualization framework provided by Apple to create virtual machines via Tart. If you've grasped the complexity of crafting a CI system of runners on bare metal instances, you can understand that introducing VMs makes the setup considerably more convoluted, which would be best discussed in a separate article.

While a setup with Tart VMs has been implemented, we realised that it's not performant enough to be put to use. Using VMs, the number of runners would double, but we preferred to have native performance as the slowdown is over 40% compared to bare metal. Moreover, when it comes to running heavy UI test suites like ours, tests became too flaky. Testing the VMs, we also realised that the standard values of Throughput and IOPS on the EBS volume didn't seem to be enough and caused disk congestion, resulting in an unacceptable slowdown in performance.

Here is a quick summary of the setup and the challenges we have faced:

- Virtual runners require 2 images: one for the VMs (Tart) and one for the host (AMI).
- We use Packer to create VM images (Vanilla, Base, IDE, Tools) with the software required, based on the templates provided by Tart, and we store the OCI-compliant images on ECR.
- We create these images on CI with dedicated workflows similar to the one described earlier for bare metal but, in this case, macOS runners (instead of Linux) are required as publishing to ECR is done with tart, which runs on macOS.
- Extra policies are required on the instance role to allow the runner to push to ECR (using temporary_iam_instance_profile_policy_document in Packer's Amazon EBS builder).
- Apple set a limit of 2 to the number of VMs that can run on an instance, which would allow doubling the size of the fleet of runners.
- Creating AMIs hosting 2 VMs is done with Packer; steps include cloning the image from ECR and configuring macos-init modules to run daemons that run the VMs via Tart.
- Deploying a virtual CI infrastructure is identical to what has already been described for bare metal.
- Connecting to and interfacing with the VMs happens from within the host. Opening SSH and especially VNC sessions from within the bare metal instances can be very confusing.
- The version of macOS on the host and the one on the VMs could differ. The version used on the host must be provided with an AMI by AWS, while the version used on the VMs is provided by Apple in IPSW files (see ipsw.me).
- The GHA runners run on the VMs, meaning that the host doesn't require Xcode installed nor any other software used by the workflows.
- VMs don't allow for provisioning, meaning we have to share configurations with the VMs via shared folders on the host with the --dir flag, which causes extra setup complexity.
- VMs can't easily run the GHA runner as a service. The svc script requires the runner to be configured first, an operation that cannot be done during the provisioning of the host. We therefore need to implement an agent ourselves to configure and connect the runner in a single script.
- To have UI access (a la VNC) to the VMs, it's first required to stop the VMs and then run them without the --no-graphics flag. At the time of writing, copy-pasting won't work even when using the --vnc or --vnc-experimental flags.
- Tartelet is a macOS app on top of Tart that allows managing multiple GitHub Actions runners in ephemeral environments on a single host machine. We didn't consider it, to avoid relying on too much third-party software and because it doesn't yet have GitHub Enterprise support.
- Worth noting that the Tart team worked on an orchestration solution named Orchard that seems to be in its initial stage.

Conclusion

In 2023 we have revamped and globalised our approach to CI. We have migrated from Jenkins to GitHub Actions as the CI/CD solution of choice for the whole group and have profoundly optimised and improved our pipelines, introducing a greater level of job parallelisation. We have implemented a brand new scalable setup for bare metal macOS runners leveraging the HashiCorp tools Packer and Terraform. We have also implemented a setup based on Tart virtual machines.

We have increased the size of our iOS team over the past few years, now including more than 40 developers, and still managed to be successful with only 5 bare metal instances on average, which is a clear statement of how performant and optimised our setup is. We have extended the capabilities of our Internal Developer Platform with a globalised approach to providing macOS runners; we feel that this setup will stand the test of time and serve various teams across JET well for years to come.

The idea of a Fastlane replacement
The story of Stellar, an attempted Fastlane replacement written in Swift, and the lessons learned along the way.
Prelude

Fastlane is widely used by iOS teams all around the world. It became the de facto standard to automate common tasks such as building apps, running tests, and uploading builds to App Store Connect. Fastlane has recently been moved under the Mobile Native Foundation, which is amazing as Google wasn't actively maintaining the project. At Just Eat Takeaway, we have implemented an extensive number of custom lanes to perform domain-specific tasks and used them from our CI.

The major problem with Fastlane is that it's written in Ruby. When it was born, using Ruby was a sound choice, but iOS developers are not necessarily familiar with the language, which represents a barrier to contributing and writing lanes. While Fastlane.swift, a version of Fastlane in Swift, has been in beta for years, it's not a rewrite in Swift but rather a "solution on top", meaning that developers and CI systems still have to rely on Ruby, install related software (rbenv or rvm) and most likely maintain a Gemfile. The average iOS dev knows well that Ruby environments are a pain to deal with and have caused an infinite number of headaches.

In recent years, Apple has introduced technologies that would enable a replacement of Fastlane using Swift:

- Swift Package Manager (SPM)
- Swift Argument Parser (SAP)

Being myself a big fan of CLI tools written in Swift, I soon started maturing the idea of a Fastlane rewrite in Swift in early 2022. I circulated the idea with friends and colleagues for months and the sentiment was clear: it's time for a fresh simil-Fastlane tool written in Swift.

Journey

Towards the end of 2022, I was determined to start this project. I teamed up with 2 iOS devs (not working at Just Eat Takeaway) and we started working on a design. I was keen on calling this project "Swiftlane" but the preference seemed to be for the name "Interstellar", which was eventually shortened to "Stellar".

Fastlane has the concept of Actions and I instinctively thought that in Swift-land, they could take the form of SPM packages. This would make Stellar a modular system with pluggable components. For example, consider the Scan action in Fastlane. It could be a package that solely solves the same problem around testing. My goal was not to implement the plethora of existing Fastlane actions but rather to create a system that allows plugging in any package building on macOS. A sound design of such a system was crucial. The Stellar ecosystem I had in mind was composed of 4 parts.

Actions

Actions are the basic building blocks of the ecosystem. They are packages that define a library product. An action can do anything, from taking care of build tasks to integrating with GitHub. Actions are independent packages that have no knowledge of the Stellar system, which treats them as pluggable components to create higher abstractions. Ideally, actions should expose an executable product (the CLI tool) using SAP calling into the action code. This is not required by Stellar but it's advisable as a best practice. Official Actions would be hosted in the Stellar organisation on GitHub. Custom Actions could be created using Stellar.

Tasks

Tasks are specific to a project and implemented by the project developers. They are SAP ParsableCommand or AsyncParsableCommand types which use actions to construct complex logic specific to the needs of the project.
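To make the idea concrete, here is a hypothetical sketch of a task built on ArgumentParser that calls into an action. The TestRunner and RunUnitTests names are purely illustrative and not part of any published Stellar code.

```swift
import ArgumentParser

// Hypothetical action, vended as a library product by an independent package.
// Name and API are illustrative only.
struct TestRunner {
    func runUnitTests(module: String) throws {
        // Invoke xcodebuild / swift test for the given module...
    }
}

// A project-specific task: a ParsableCommand that composes actions
// to implement project-specific logic, similar to a Fastlane lane.
struct RunUnitTests: ParsableCommand {
    static let configuration = CommandConfiguration(
        commandName: "run_unit_tests",
        abstract: "Runs the unit tests of a given module."
    )

    @Option(help: "The module to test.")
    var module: String

    func run() throws {
        try TestRunner().runUnitTests(module: module)
    }
}
```

Presumably, the Executor described next would register tasks like this one as its subcommands.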
Both developers and CI would interface with the Executor (masked as Stellar) to perform all operations. E.g. stellar setup_environment --developer-mode stellar run_unit_tests module=OrderHistory stellar setup_demo_app module=OrderHistory stellar run_ui_tests module=OrderHistory device="iPhone 15 Pro" Stellar CLI Stellar CLI is a command line tool that takes care of the heavy lifting of dealing with the Executor and the Tasks. It allows the integration of Stellar in a project and it should expose the following main commands: init: initialises the project by creating an Executor package in the .stellar folder build: builds the Executor generating a binary that is shared with the team members and used by CI create-action: scaffolding to create a new action in the form of a package create-task: scaffolding to create a new task in the form of a package edit: opens the Executor package for editing, similar to tuist edit This design was presented to a restricted group of devs at Just Eat Takeaway and it didn't take long to get an agreement on it. It was clear that once Stellar was completed, we would integrate it into the codebase. Wider design I believe that a combination of CLI tools can create complex, templateable and customizable stacks to support the creation and growth of iOS codebases. Based on the experience developed at JET working on a large modular project with lots of packages, helper tools and optimised CI pipelines, I wanted Stellar to eventually be part of a set of tools taking the name "Stellar Tools" that could enable the creation and the management of large codebases. Something like the following: Tuist: generates projects and workspaces programmatically PackageGenerator: generates packages using a DSL Stacker: creates a modular iOS project based on a DSL Stellar: automates tasks Workflows: generates GitHub Actions workflows that use Stellar From my old notes: Current state After a few months of development within this team (made of devs not working at Just Eat Takeaway), I realised things were not moving in the direction I desired and I decided it was not beneficial to continue the collaboration with the team. We stopped working on Stellar mainly due to different levels of commitment from each of us and a focus on the wrong tasks, signalling a lack of project management from my end. For example, a considerable amount of time and effort went into the implementation of a version management system (vastly inspired by the one used in Tuist) that was not part of the scope of the Stellar project. The experience left me bitter and demotivated, teaching me that sometimes projects are best started alone. We made the repo public on GitHub, aware that it was far from being production-ready, but in my opinion it's no doubt a nice, inspiring MVP. GitHub - StellarTools/Stellar GitHub - StellarTools/ActionDSL The intent was then to progress on my own or with my colleagues at JET. As things evolved in 2023, we embarked on big projects that continued to evolve the platform, such as a massive migration to GitHub Actions. To this day, we still plan to remove Fastlane as our vision is to rely on external dependencies as little as possible, but there is no plan to use Stellar as-is.
I suspect that, for the infrastructure team at JET, things will evolve in a way that sees more CLI tools being implemented and more GitHub actions using them.
CloudWatch dashboards and alarms on Mac instances
CloudWatch is great for observing and monitoring resources and applications on AWS, on premises, and on other clouds. While it's trivial to have the agent running on Linux, it's a bit more involved for mac instances (which are commonly used as CI workers). The support was announced in January 2021 for mac1.metal (Intel/x86_64) and I bumped into some challenges on mac2.metal (M1/ARM64) that the team at AWS helped me solve (see this issue on the GitHub repo). I couldn't find other articles nor precise documentation from AWS which is why I'm writing this article to walk you through a common CloudWatch setup. The given code samples are for the HashiCorp tools Packer and Terraform and focus on mac2.metal instances. I'll cover the following steps: install the CloudWatch agent on mac2.metal instances configure the CloudWatch agent create a CloudWatch dashboard setup CloudWatch alarms setup IAM permissions Install the CloudWatch agent The CloudWatch agent can be installed by downloading the pkg file listed on this page and running the installer. You probably want to bake the agent into your AMI, so here is the Packer code for mac2.metal (ARM): # Install wget via brew provisioner "shell" { inline = [ "source ~/.zshrc", "brew install wget" ] } # Install CloudWatch agent provisioner "shell" { inline = [ "source ~/.zshrc", "wget https://s3.amazonaws.com/amazoncloudwatch-agent/darwin/arm64/latest/amazon-cloudwatch-agent.pkg", "sudo installer -pkg ./amazon-cloudwatch-agent.pkg -target /" ] } For the agent to work, you'll need collectd (https://collectd.org/) to be installed on the machine, which is usually done via brew. Brew installs it at /opt/homebrew/sbin/. This is also a step you want to perform when creating your AMI. # Install collectd via brew provisioner "shell" { inline = [ "source ~/.zshrc", "brew install collectd" ] } Configure the CloudWatch agent In order to run, the agent needs a configuration which can be created using the wizard. This page defines the metric sets that are available. Running the wizard with the command below will allow you to generate a basic json configuration which you can modify later. sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard The following is a working configuration for Mac instances so you can skip the process. 
{ "agent": { "metrics_collection_interval": 60, "run_as_user": "root" }, "metrics": { "aggregation_dimensions": [ [ "InstanceId" ] ], "append_dimensions": { "AutoScalingGroupName": "${aws:AutoScalingGroupName}", "ImageId": "${aws:ImageId}", "InstanceId": "${aws:InstanceId}", "InstanceType": "${aws:InstanceType}" }, "metrics_collected": { "collectd": { "collectd_typesdb": [ "/opt/homebrew/opt/collectd/share/collectd/types.db" ], "metrics_aggregation_interval": 60 }, "cpu": { "measurement": [ "cpu_usage_idle", "cpu_usage_iowait", "cpu_usage_user", "cpu_usage_system" ], "metrics_collection_interval": 60, "resources": [ "*" ], "totalcpu": false }, "disk": { "measurement": [ "used_percent", "inodes_free" ], "metrics_collection_interval": 60, "resources": [ "*" ] }, "diskio": { "measurement": [ "io_time", "write_bytes", "read_bytes", "writes", "reads" ], "metrics_collection_interval": 60, "resources": [ "*" ] }, "mem": { "measurement": [ "mem_used_percent" ], "metrics_collection_interval": 60 }, "netstat": { "measurement": [ "tcp_established", "tcp_time_wait" ], "metrics_collection_interval": 60 }, "statsd": { "metrics_aggregation_interval": 60, "metrics_collection_interval": 10, "service_address": ":8125" }, "swap": { "measurement": [ "swap_used_percent" ], "metrics_collection_interval": 60 } } } } I have enhanced the output of the wizard with some reasonable metrics to collect. The configuration created by the wizard is almost working but it's lacking a fundamental config to make it work out-of-the-box: the collectd_typesdb value. Linux and Mac differ when it comes to the location of collectd and types.db, and the agent defaults to the Linux path even if it was built for Mac, causing the following error when trying to run the agent: ======== Error Log ======== 2023-07-23T04:57:28Z E! [telegraf] Error running agent: Error loading config file /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml: error parsing socket_listener, open /usr/share/collectd/types.db: no such file or directory Moreover, the /usr/share/ folder is not writable unless you disable SIP (System Integrity Protection) which cannot be done on EC2 mac instances nor is something you want to do for security reasons. The final configuration is something you want to save in System Manager Parameter Store using the ssm_parameter resource in Terraform: resource "aws_ssm_parameter" "cw_agent_config_darwin" { name = "/cloudwatch-agent/config/darwin" description = "CloudWatch agent config for mac instances" type = "String" value = file("./cw-agent-config-darwin.json") } and use it when running the agent in a provisioning step: resource "null_resource" "run_cloudwatch_agent" { depends_on = [ aws_instance.mac_instance ] connection { type = "ssh" agent = false host = aws_instance.mac_instance.private_ip user = "ec2-user" private_key = tls_private_key.mac_instance.private_key_pem timeout = "30m" } # Run CloudWatch agent provisioner "remote-exec" { inline = [ "sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c ssm:${aws_ssm_parameter.cw_agent_config_darwin.name}" ] } } Create a CloudWatch dashboard Once the instances are deployed and running, they will send events to CloudWatch and we can create a dashboard to visualise them. You can create a dashboard manually in the console and once you are happy with it, you can just copy the source code, store it in a file and feed it to Terraform. 
Here is mine that could probably work for you too if you tag your instances with the Type set to macOS: { "widgets": [ { "height": 15, "width": 24, "y": 0, "x": 0, "type": "explorer", "properties": { "metrics": [ { "metricName": "cpu_usage_user", "resourceType": "AWS::EC2::Instance", "stat": "Average" }, { "metricName": "cpu_usage_system", "resourceType": "AWS::EC2::Instance", "stat": "Average" }, { "metricName": "disk_used_percent", "resourceType": "AWS::EC2::Instance", "stat": "Average" }, { "metricName": "diskio_read_bytes", "resourceType": "AWS::EC2::Instance", "stat": "Average" }, { "metricName": "diskio_write_bytes", "resourceType": "AWS::EC2::Instance", "stat": "Average" } ], "aggregateBy": { "key": "", "func": "" }, "labels": [ { "key": "Type", "value": "macOS" } ], "widgetOptions": { "legend": { "position": "bottom" }, "view": "timeSeries", "stacked": false, "rowsPerPage": 50, "widgetsPerRow": 1 }, "period": 60, "splitBy": "", "region": "eu-west-1" } } ] } You can then use the cloudwatch_dashboard resource in Terraform: resource "aws_cloudwatch_dashboard" "mac_instances" { dashboard_name = "mac-instances" dashboard_body = file("./cw-dashboard-mac-instances.json") } It will show something like this: Setup CloudWatch alarms Once the dashboard is up, you should set up alarms so that you are notified of any anomalies, rather than actively monitoring the dashboard for them. What works for me is having alarms triggered via email when the used disk space is going above a certain level (say 80%). We can use the cloudwatch_metric_alarm resource. resource "aws_cloudwatch_metric_alarm" "disk_usage" { alarm_name = "mac-${aws_instance.mac_instance.id}-disk-usage" comparison_operator = "GreaterThanThreshold" evaluation_periods = 30 metric_name = "disk_used_percent" namespace = "CWAgent" period = 120 statistic = "Average" threshold = 80 alarm_actions = [aws_sns_topic.disk_usage.arn] dimensions = { InstanceId = aws_instance.mac_instance.id } } We can then create an SNS topic and subscribe all interested parties to it. This will allow us to broadcast to all subscribers when the alarm is triggered. For this, we can use the sns_topic and sns_topic_subscription resources. resource "aws_sns_topic" "disk_usage" { name = "CW_Alarm_disk_usage_mac_${aws_instance.mac_instance.id}" } resource "aws_sns_topic_subscription" "disk_usage" { for_each = toset(var.alarm_subscriber_emails) topic_arn = aws_sns_topic.disk_usage.arn protocol = "email" endpoint = each.value } variable "alarm_subscriber_emails" { type = list(string) } If you are deploying your infrastructure via GitHub Actions, you can set your subscribers as a workflow input or as an environment variable. Here is how you should pass a list of strings via a variable in Terraform: name: Deploy Mac instance env: ALARM_SUBSCRIBERS: '["user1@example.com","user2@example.com"]' AMI: ... jobs: deploy: ... steps: - name: Terraform apply run: | terraform apply \ --var ami=${{ env.AMI }} \ --var alarm_subscriber_emails='${{ env.ALARM_SUBSCRIBERS }}' \ --auto-approve Setup IAM permissions The instance that performs the deployment requires permissions for CloudWatch, System Manager, and SNS. The following is a policy that is enough to perform both terraform apply and terraform destroy. Please consider restricting to specific resources. 
{ "Version": "2012-10-17", "Statement": [ { "Sid": "CloudWatchDashboardsPermissions", "Effect": "Allow", "Action": [ "cloudwatch:DeleteDashboards", "cloudwatch:GetDashboard", "cloudwatch:ListDashboards", "cloudwatch:PutDashboard" ], "Resource": "*" }, { "Sid": "CloudWatchAlertsPermissions", "Effect": "Allow", "Action": [ "cloudwatch:DescribeAlarms", "cloudwatch:DescribeAlarmsForMetric", "cloudwatch:DescribeAlarmHistory", "cloudwatch:DeleteAlarms", "cloudwatch:DisableAlarmActions", "cloudwatch:EnableAlarmActions", "cloudwatch:ListTagsForResource", "cloudwatch:PutMetricAlarm", "cloudwatch:PutCompositeAlarm", "cloudwatch:SetAlarmState" ], "Resource": "*" }, { "Sid": "SystemsManagerPermissions", "Effect": "Allow", "Action": [ "ssm:GetParameter", "ssm:GetParameters", "ssm:ListTagsForResource", "ssm:DeleteParameter", "ssm:DescribeParameters", "ssm:PutParameter" ], "Resource": "*" }, { "Sid": "SNSPermissions", "Effect": "Allow", "Action": [ "sns:CreateTopic", "sns:DeleteTopic", "sns:GetTopicAttributes", "sns:GetSubscriptionAttributes", "sns:ListSubscriptions", "sns:ListSubscriptionsByTopic", "sns:ListTopics", "sns:SetSubscriptionAttributes", "sns:SetTopicAttributes", "sns:Subscribe", "sns:Unsubscribe" ], "Resource": "*" } ] } On the other hand, to send logs to CloudWatch, the Mac instances require permissions given by the CloudWatchAgentServerPolicy: resource "aws_iam_role_policy_attachment" "mac_instance_iam_role_cw_policy_attachment" { role = aws_iam_role.mac_instance_iam_role.name policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy" } Conclusion You have now defined your CloudWatch dashboard and alarms using "Infrastructure as Code" via Packer and Terraform. I've covered the common use case of instances running out of space on disk which is useful to catch before CI starts becoming unresponsive slowing your team down.
Easy connection to AWS Mac instances with EC2macConnector
Overview Amazon Web Services (AWS) provides EC2 Mac instances commonly used as CI workers. Configuring them can be either a manual or an automated process, depending on the DevOps and Platform Engineering experience in your company. No matter what process you adopt, it is sometimes useful to log into the instances to investigate problems. EC2macConnector is a CLI tool written in Swift that simplifies the process of connecting over SSH and VNC for DevOps engineers, removing the need to update private keys and maintain the list of IPs that change across deployment cycles. Connecting to EC2 Mac instances without EC2macConnector AWS documentation describes the steps needed to allow connecting via VNC: Start the Apple Remote Desktop agent and enable remote desktop access on the instance Set the password for the ec2-user user on the instance to allow connecting over VNC Start an SSH session Connect over VNC Assuming steps 1 and 2 are done, steps 3 and 4 are usually manual and repetitive: the private keys and IPs usually change across deployments, which could happen frequently, even daily. Here is how to start an SSH session in the terminal binding a port locally: ssh ec2-user@<instance_IP> \ -L <local_port>:localhost:5900 \ -i <path_to_private_key> To connect over VNC you can type the following in Finder → Go → Connect to Server (⌘ + K) and click Connect: vnc://ec2-user@localhost:<local_port> or you could create a .vncloc file with the following content and simply open it: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>URL</key> <string>vnc://ec2-user@localhost:<local_port></string> </dict> </plist> If you are a system administrator, you might consider EC2 Instance Connect, but sadly, in my experience, it's not a working option for EC2 Mac instances even though I couldn't find evidence confirming or denying this statement. Administrators could also consider using Apple Remote Desktop which comes with a price tag of $/£79.99. Connecting to EC2 Mac instances with EC2macConnector EC2macConnector is a simple and free tool that works in 2 steps: the configure command fetches the private keys and the IP addresses of the running EC2 Mac instances in a given region, and creates files using the said information to connect over SSH and VNC: ec2macConnector configure \ --region <aws_region> \ --secrets-prefix <mac_metal_private_keys_prefix> Read below or the README for more information on the secrets prefix value. the connect command connects to the instances via SSH or VNC. ec2macConnector connect --region <aws_region> <fleet_index> ec2macConnector connect --region <aws_region> <fleet_index> --vnc 💡 Connecting over VNC requires an SSH session to be established first. As with any tool written using ArgumentParser, use the --help flag to get more information. Requirements There are some requirements to respect for the tool to work: Permissions EC2macConnector requires AWS credentials either set as environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) or configured in ~/.aws/credentials via the AWS CLI. Environment variables take precedence. The user must be granted the following permissions: ec2:DescribeInstances secretsmanager:ListSecrets secretsmanager:GetSecretValue EC2 instances The EC2 Mac instances must have the EC2macConnector:FleetIndex tag set to the index of the instance in the fleet. Indexes should start at 1.
Instances that don't have the said tag will be ignored. Secrets and key pairs formats EC2macConnector assumes that the private key for each instance key pair is stored in Secrets Manager. The name of the key pair could and should be different from the secret ID. For example, the instance key pair name should include an incremental number that is also part of the corresponding secret ID. Consider that the number of Mac instances in an AWS account is limited and it's convenient to refer to them using an index starting at 1. It's good practice for the secret ID to also include a nonce, as secrets with the same name cannot be recreated before the deletion period has elapsed; this allows frequent provisioning-decommissioning cycles. For the above reasons, EC2macConnector assumes the following formats are used: instance key pairs: <keypair_prefix>_<index_of_instance_in_fleet> e.g. mac_instance_key_pair_5 secret IDs: <secrets_prefix>_<index_of_instance_in_fleet>_<nonce> e.g. private_key_mac_metal_5_dx9Wna73B EC2macConnector Under the hood The configure command: downloads the private keys in the ~/.ssh folder creates scripts to connect over SSH in ~/.ec2macConnector/<region>/scripts creates vncloc files to connect over VNC in ~/.ec2macConnector/<region>/vnclocs ➜ .ec2macConnector tree ~/.ssh /Users/alberto/.ssh ├── mac_metal_1_i-08e4ffd8e9xxxxxxx ├── mac_metal_2_i-07bfff1f52xxxxxxx ├── mac_metal_3_i-020d680610xxxxxxx ├── mac_metal_4_i-08516ac980xxxxxxx ├── mac_metal_5_i-032bedaabexxxxxxx ├── config ├── known_hosts └── ... The connect command: runs the scripts (opens new shells in Terminal and connects to the instances over SSH) opens the vncloc files ➜ .ec2macConnector tree ~/.ec2macConnector /Users/alberto/.ec2macConnector └── us-east-1 ├── scripts │ ├── connect_1.sh │ ├── connect_2.sh │ ├── connect_3.sh │ ├── connect_4.sh │ └── connect_5.sh └── vnclocs ├── connect_1.vncloc ├── connect_2.vncloc ├── connect_3.vncloc ├── connect_4.vncloc └── connect_5.vncloc
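To give a feel for what happens under the hood, here is a minimal sketch of how the configure command could generate the per-instance artifacts, assuming the IP address, private key path and fleet index have already been fetched from EC2 and Secrets Manager; it is illustrative only and not the actual EC2macConnector implementation.

import Foundation

// Illustrative only: a simplified writer for the artifacts the configure
// command produces for each instance (paths and parameters are assumptions).
struct ConnectionArtifactsWriter {
    let region: String
    let fleetIndex: Int
    let instanceIP: String
    let privateKeyPath: String
    let localPort: Int

    func write() throws {
        let fileManager = FileManager.default
        let baseDir = fileManager.homeDirectoryForCurrentUser
            .appendingPathComponent(".ec2macConnector/\(region)")
        let scriptsDir = baseDir.appendingPathComponent("scripts")
        let vnclocsDir = baseDir.appendingPathComponent("vnclocs")
        try fileManager.createDirectory(at: scriptsDir, withIntermediateDirectories: true)
        try fileManager.createDirectory(at: vnclocsDir, withIntermediateDirectories: true)

        // SSH script that binds a local port to the VNC port (5900) on the instance.
        let script = """
        #!/bin/bash
        ssh ec2-user@\(instanceIP) \\
          -L \(localPort):localhost:5900 \\
          -i \(privateKeyPath)
        """
        let scriptURL = scriptsDir.appendingPathComponent("connect_\(fleetIndex).sh")
        try script.write(to: scriptURL, atomically: true, encoding: .utf8)
        try fileManager.setAttributes([.posixPermissions: 0o755], ofItemAtPath: scriptURL.path)

        // .vncloc file pointing at the locally forwarded port.
        let vncloc = """
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>URL</key>
            <string>vnc://ec2-user@localhost:\(localPort)</string>
        </dict>
        </plist>
        """
        let vnclocURL = vnclocsDir.appendingPathComponent("connect_\(fleetIndex).vncloc")
        try vncloc.write(to: vnclocURL, atomically: true, encoding: .utf8)
    }
}

The connect command would then simply execute the script for the requested fleet index and, with the --vnc flag, open the corresponding .vncloc file.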

Toggles: the easiest feature flagging in Swift
I previously wrote about JustTweak here. It's the feature flagging mechanism we've been using at Just Eat Takeaway.com to power the iOS consumer apps since 2017. It's proved to be very stable and powerful and it has evolved over time. Friends have heard me promoting it vehemently and some have integrated it with success and appreciation. I don't think I've promoted it in the community enough (it definitely deserved more) but marketing has never been my thing. Anyway, JustTweak grew old and some changes were debatable and not to my taste. I then decided to use the knowledge gained from years of working on feature flagging to give this project a new life by rewriting it from scratch as a personal project. And here it is: Toggles. I never tweeted about this side project of mine 😜 It's like JustTweak (feature flagging), but sensibly better. https://t.co/bdGWuUyQEU #Swift #iOS #macOS — Alberto De Bortoli (@albertodebo) March 23, 2023 Think of JustTweak, but better. A lot better. Frankly, I couldn't have written it better. Here are the main highlights: brand new code, obsessively optimized and kept as short and simple as possible extreme performance fully tested fully documented performant UI debug view in SwiftUI standard providers provided demo app provided ability to listen for value changes (using Combine) simpler APIs ToggleGen CLI, to allow code generation ToggleCipher CLI, to allow encoding/decoding of secrets JustTweakMigrator CLI, to allow a smooth transition from JustTweak Read all about it on the repo's README and on the DocC page. It's on Swift Package Index too. Toggles – Swift Package Index Toggles by TogglesPlatform on the Swift Package Index – Toggles is an elegant and powerful solution to feature flagging for Apple platforms. There are plans (or at least the desire!) to write a backend with Andrea Scuderi. That'd be really nice! @albertodebo This wasn't planned! It looks like we need to build the backend for #Toggles with #Breeze! pic.twitter.com/OxNovRl70L — andreascuderi (@andreascuderi13) March 26, 2023
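As a side note on the "listen for value changes" highlight, the snippet below is a deliberately generic Combine sketch of that observation pattern. It is not Toggles' actual API (see the README and DocC for the real one); it only illustrates how a provider can expose a publisher per flag.

import Combine

// Illustrative only: a minimal provider exposing a Combine publisher per flag.
// This is not the Toggles API, just the general observation pattern.
final class FlagProvider {
    private let subject = CurrentValueSubject<Bool, Never>(false)

    var isCheckoutRedesignEnabled: AnyPublisher<Bool, Never> {
        subject.removeDuplicates().eraseToAnyPublisher()
    }

    func set(_ value: Bool) {
        subject.send(value)
    }
}

// Usage: react to value changes as they happen.
let provider = FlagProvider()
let cancellable = provider.isCheckoutRedesignEnabled
    .sink { enabled in
        print("Checkout redesign enabled: \(enabled)")
    }
provider.set(true)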
The Continuous Integration system used by the mobile teams
- iOS
- Continuous Integration
- Jenkins
- DevOps
In this article, we’ll discuss the way our mobile teams have evolved the Continuous Integration (CI) stack over the recent years.
Originally published on the Just Eat Takeaway Engineering Blog. Overview In this article, we’ll discuss the way our mobile teams have evolved the Continuous Integration (CI) stack over the recent years. We don’t have DevOps engineers in our team and, until recently, we had adopted a singular approach in which CI belongs to the whole team and everyone should be able to maintain it. This has proven to be difficult and extremely time-consuming. The Just Eat side of our newly merged entity has a dedicated team providing continuous integration and deployment tools to their teams but they are heavily backend-centric and there has been little interest in implementing solutions tailored for mobile teams. As is often the case in tech companies, there is a missing link between mobile and DevOps teams. The iOS team is the author and first consumer of the solution described but, as you can see, we have ported the same stack to Android as well. We will mainly focus on the iOS implementation in this article, with references to Android as appropriate. 2016–2020 Historically speaking, the iOS UK app was running on Bitrise because it was decided not to invest time in implementing a CI solution, while the Bristol team was using a Jenkins version installed by a different team. This required manual configuration with custom scripts and it had custom in-house hardware. These are two quite different approaches indeed and, at this stage, things were not great but somehow good enough. It’s fair to say we were still young on the DevOps front. When we merged the teams, it was clear that we wanted to unify the CI solution and the obvious choice for a company of our size was to not use a third-party service, bringing us to invest more and more in Jenkins. Only one team member had good knowledge of Jenkins but the rest of the team showed little interest in learning how to configure and maintain it, causing the stack to eventually become a dumping ground of poorly configured jobs. It was during this time that we introduced Fastlane (making the common tasks portable), migrated the UK app from Bitrise to Jenkins, started running the UI tests on Pull Requests, and made other small yet sensible improvements. 2020–2021 Starting in mid-2020, the iOS team significantly revamped its CI stack and gave it new life. The main goals we wanted to achieve (and did by early 2021) were: Revisit the pipelines Clear Jenkins configuration and deployment strategy Make use of AWS Mac instances Reduce the pool size of our mac hardware Share our knowledge across teams better Since the start of the pandemic, we have implemented the pipelines in code (bidding farewell to custom bash scripts), moved to a monorepo (a massive step ahead) and begun using SonarQube even more aggressively. We added Slack reporting and PR Assigner, an internal tool implemented by Andrea Antonioni. We also automated the common release tasks such as cutting and completing a release and uploading the dSYMs to Firebase. We invested a lot in optimizing various aspects such as running the UI tests in parallel and making use of shallow repo cloning. We also moved to not checking in the pods within the repo. This, eventually, allowed us to reduce the number of agents for easier infrastructure maintenance. Automating the infrastructure deployment of Jenkins was a fundamental shift compared to the previous setup and we have introduced AWS Mac instances replacing part of the fleet of our in-house hardware. CI system setup Let’s take a look at our stack.
Before we start, we’d like to thank Isham Araia for having provided a proof of concept for the configuration and deployment of Jenkins. He talked about it at https://ish-ar.io/jenkins-at-scale/ and it represented a fundamental starting point, saving us several days of researching. Triggering flow Starting from the left, we have our repositories (plural, as some shared dependencies don’t live in the monorepo). The repositories contain the pipelines in the form of Jenkinsfiles and they call into Fastlane lanes. Pretty much every action is a lane, from running the tests to archiving for the App Store to creating the release branches. Changes are raised through pull requests that trigger Jenkins. There are other ways to trigger Jenkins: by user interaction (for things such as completing a release or archiving and uploading the app to App Store Connect) and cron triggers (for things such as building the nightly build, running the tests on the develop branch every 12 hours, or uploading the PACT contract to the broker). Once Jenkins has received the information, it will then schedule the jobs to one of the agents in our pool, which is now made up of 5 agents, 2 in the cloud and 3 in-house mac pros. Reporting flow Now that we’ve talked about the first part of the flow, let’s talk about the flow of information coming back at us. Every PR triggers PR Assigner, a tool that works out a list of reviewers to assign to pull requests and notifies engineers via dedicated Slack channels. The pipelines post on Slack, providing info about all the jobs that are being executed so we can read the history without having to log into Jenkins. We have in place the standard notification flow from Jenkins to GitHub to set the status checks and Jenkins also notifies SonarQube to verify that any change meets the quality standards (namely code coverage percentage and coding rules). We also have a smart lambda named SonarQubeStatusProcessor that reports to GitHub, written by Alan Nichols. This is due to a current limitation of SonarQube, which only allows reporting the status of one SQ project to one GitHub repo. Since we have a monorepo structure we had to come up with this neat customization to report the SQ status for all the modules that have changed as part of the PR. Configuration Let’s see what the new interesting parts of Jenkins are. Other than Jenkins itself and several plugins, it’s important to point out JCasC and Job DSL. JCasC stands for Jenkins Configuration as Code, and it allows you to configure Jenkins via a yaml file. The point here is that nobody should ever touch the Jenkins settings directly from the configuration page, in the same way, one ideally shouldn’t apply configuration changes manually in any dashboard. The CasC file is where we define the Slack integration, the user roles, SSO configuration, the number of agents and so on. We could also define the jobs in CasC but we go a step further than that. We use the Job DSL plugin that allows you to configure the jobs in groovy and in much more detail. One job we configure in the CasC file though is the seed job. This is a simple freestyle job that will go pick the jobs defined with Job DSL and create them in Jenkins. Deployment Let’s now discuss how we can get a configured Jenkins instance on EC2. In other words, how do we deploy Jenkins? We use a combination of tools that are bread and butter for DevOps people. The commands on the left spawn a Docker container that calls into the tools on the right. 
We start with Packer which allows us to create the AMI (Amazon Machine Image) together with Ansible, allowing us to configure an environment easily (much more easily than Chef or Puppet). Running the create-image command, the script will: 1. Create a temporary EC2 instance 2. Connect to the instance and execute an Ansible playbook Our playbook encompasses a number of steps, here’s a summary: install the Jenkins version for the given Linux distribution install Nginx copy the SSL cert over configure nginx w/ SSL termination and reverse proxy install the plugins for Jenkins Once the playbook is executed, Packer will export an AMI in EC2 with all of this in it and destroy the instance that was used. With the AMI ready, we can now proceed to deploy our Jenkins. For the actual deployment, we use Terraform which allows us to define our infrastructure in code. The deploy command runs Terraform under the hood to set up the infrastructure, here’s a summary of the tasks: create an IAM Role + IAM Policy configure security groups create the VPC and subnet to use with a specific CIDR block create the private key pair used to connect over SSH deploy the instance using a static private IP (it has to be static otherwise the A record in Route53 would break) copy the JCasC configuration file over so that when Jenkins starts it picks that up to configure itself The destroy command runs a “terraform destroy” and destroys everything that was created with the deploy command. Deploy and destroy balance each other out. Now that we have Jenkins up and running, we need to give it some credentials so our pipelines are able to work properly. A neat way of doing this is by having the secrets (SSH keys, Firebase tokens, App Store Connect API Key and so forth) in AWS Secrets Manager which is based on KMS and use a Jenkins plugin to allow Jenkins to access them. It’s important to note that developers don’t have to install Packer, Ansible, Terraform or even the AWS CLI locally because the commands run a Docker container that does the real work with all the tools installed. As a result, the only thing one should have installed is really Docker. CI agents Enough said about Jenkins, it’s time to talk about the agents. As you probably already know, in order to run tests, compile and archive iOS apps we need Xcode, which is only available on macOS, so Linux or Windows instances are not going to cut it. We experimented with the recently introduced AWS Mac instances and they are great, ready out-of-the-box with minimal configuration on our end. What we were hoping to get to with this recent work was the ability to leverage the Jenkins Cloud agents. That would have been awesome because it would have allowed us to: let Jenkins manage the agent instances scale the agent pool according to the load on CI Sadly we couldn't go that far. Limitations are: the bootstrapping of a mac1.metal takes around 15 minutes reusing the dedicated host after having stopped an instance can take up to 3 hours — during that time we just pay for a host that is not usable When you stop or terminate a Mac instance, Amazon EC2 performs a scrubbing workflow on the underlying Dedicated Host to erase the internal SSD, to clear the persistent NVRAM variables, and if needed, to update the bridgeOS software on the underlying Mac mini. This ensures that Mac instances provide the same security and data privacy as other EC2 Nitro instances. It also enables you to run the latest macOS AMIs without manually updating the bridgeOS software.
During the scrubbing workflow, the Dedicated Host temporarily enters the pending state. If the bridgeOS software does not need to be updated, the scrubbing workflow takes up to 50 minutes to complete. If the bridgeOS software needs to be updated, the scrubbing workflow can take up to 3 hours to complete. Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html In other words: scaling mac instances is not an option and leaving the instances up 24/7 seems to be the easiest option. This is especially valid if your team is distributed and jobs could potentially run over the weekend as well, saving you the hassle of implementing downscaling ahead of the weekend. There are some pricing and instance allocation considerations to make. Note that On-Demand Mac1 Dedicated Hosts have a minimum host allocation and billing duration of 24 hours. “You can purchase Savings Plans to lower your spend on Dedicated Hosts. Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instances usage, regardless of instance family, size, OS, tenancy or AWS Region.” Source: https://aws.amazon.com/ec2/dedicated-hosts/pricing/ The On-Demand rate is $1.207 per hour. I’d like to stress that no CI solution comes for free. I’ve often heard developers indicating that Travis and similar products are cheaper. The truth is that the comparison is not even remotely reasonable: virtual boxes are incredibly slow compared to native Apple hardware and take ridiculous bootstrapping times. Even the smallest projects suffer terribly. One might ask if it’s at least possible to use the same configuration process we used for the Jenkins instance (with Packer and Ansible) but sadly we hit additional limitations: Apple requires 2FA for downloading Xcode via xcode-version Apple requires 2FA for signing into Xcode The above pretty much causes the configuration flow to fall apart making it impossible to configure an instance via Ansible. Cloud agents for Android It was a different story for Android, in which we could easily configure the agent instance with Ansible and therefore leverage the Cloud configuration to allow automatic agent provisioning. This configuration is defined via CasC as everything else. To better control EC2 usage and costs, a few settings come in handy: minimum number of instances (up at all times) minimum number of spare instances (created to accommodate future jobs) instance cap: the maximum number of instances that can be provisioned at the same time idle termination time: how long agents should be kept alive after they have completed the job All of the above allow for proper scaling and a lot less maintenance compared to the iOS setup. A simple setup with 0 instances up at all times allows saving costs overnight and given that in our case the bootstrapping takes only 2 minutes, we can rely on the idle time setting. Conclusions Setting up an in-house CI is never a straightforward process and it requires several weeks of dedicated work. After years of waiting, Apple has announced Xcode Cloud which we believe will drastically change the landscape of continuous integration on iOS. The solution will most likely cause havoc for companies such as Bitrise and CircleCI and it’s reasonable to assume the pricing will be competitive compared to AWS, maybe running on custom hardware that only Apple is able to produce. 
A shift this big will take time to occur, so we foresee our solution staying in use for quite some time. We hope to have inspired you on what a possible setup for mobile teams could look like and informed you about the pros & cons of using EC2 Mac instances.

iOS Monorepo & CI Pipelines
- iOS
- Monorepo
- Continuous Integration
- Jenkins
- Cocoapods
We have presented our modular iOS architecture in a previous article and I gave a talk at Swift Heroes 2020 about it. In this article, we’ll analyse the challenges we faced to have the modular architecture integrated with our CI pipelines and the reasoning behind migrating to a monorepo.
Originally published on the Just Eat Takeaway Engineering Blog. We have presented our modular iOS architecture in a previous article and I gave a talk at Swift Heroes 2020 about it. In this article, we’ll analyse the challenges we faced to have the modular architecture integrated with our CI pipelines and the reasoning behind migrating to a monorepo. The Problem Having several modules in separate repositories brings forward 2 main problems: Each module is versioned independently from the consuming app Each change involves at least 2 pull requests: 1 for the module and 1 for the integration in the app While the above was acceptable in a world where we had 2 different codebases, it soon became unnecessarily convoluted after we migrated to a new, global codebase. New module versions are implemented with the ultimate goal of being adopted by the only global codebase in use, making us realise we could simplify the change process. The monorepo approach has been discussed at length by the community for a few years now. Many talking points have come out of these conversations, even leading to an interesting story as told by Uber. In short, it entails putting all the code owned by the team in a single repository, precisely solving the 2 problems stated above. Monorepo structure The main advantage of a monorepo is a streamlined PR process that doesn’t require us to raise multiple PRs, de facto reducing the number of pull requests to one. It also simplifies the versioning, allowing module and app code (ultimately shipped together) to be aligned using the same versioning. The first step towards a monorepo was to move the content of the repositories of the modules to the main app repo (we’ll call it “monorepo” from now on). Since we rely on CocoaPods, the modules would be consumed as development pods. Here’s a brief summary of the steps used to migrate a module to the monorepo: Inform the relevant teams about the upcoming migration Make sure there are no open PRs in the module repo Make the repository read-only and archive it Copy the module to the Modules folder of the monorepo (it’s possible to merge 2 repositories to keep the history but we felt we wanted to keep the process simple; the old history is still available in the old repo anyway) Delete the module .git folder (or it would be treated as a git submodule) Remove the Gemfile and Gemfile.lock, the fastlane folder, the .gitignore file, sonar-project.properties and .swiftlint.yml, so as to use those in the monorepo Update the monorepo’s CODEOWNERS file with the module codeowners Remove the .github folder Modify the app Podfile to point to the module as a dev pod and install it Make sure all the modules’ demo apps in the monorepo refer to the new module as a dev pod (if they depend on it at all). The same applies to the module under migration. Delete the CI jobs related to the module Leave the podspecs in the private Specs repo (might be needed to build old versions of the app) The above assumes that CI is configured in a way that preserves the same integration steps upon a module change. We’ll discuss them later in this article. Not all the modules could be migrated to the monorepo, due to the fact that second-level dependencies need to live in separate repositories in order to be referenced in the podspec of a development pod. If not done correctly, CocoaPods will not be able to install them.
We considered moving these dependencies to the monorepo whilst maintaining separate versioning, however, the main problem with this approach is that the version tags might conflict with the ones of the app. Even though CocoaPods supports tags that don’t respect semantic versioning (for example prepending the tag with the name of the module), violating it just didn’t feel right. EDIT: we’ve learned that it’s possible to move such dependencies to the monorepo. This is done not by defining :path=> in the podspecs but instead by doing so in the Podfile of the main app, which is all Cocoapods needs to work out the location of the dependency on disk. Swift Package Manager considerations We investigated the possibility of migrating from CocoaPods to Apple’s Swift Package Manager. Unfortunately, when it comes to handling the equivalent of development pods, Swift Package Manager really falls down for us. It turns out that Swift Package Manager only supports one package per repo, which is frustrating because the process of working with editable packages is surprisingly powerful and transparent. Version pinning rules While development pods don’t need to be versioned, other modules still need to. This is either because of their open-source nature or because they are second-level dependencies (referenced in other modules’ podspecs). Here’s a revised overview of the current modular architecture in 2021. We categorised our pods to better clarify what rules should apply when it comes to version pinning both in the Podfiles and in the podspecs. Open-Source pods Our open-source repositories on github.com/justeat are only used by the app. Examples: JustTweak, AutomationTools, Shock Pinning in other modules’ podspec: NOT APPLICABLE open-source pods don’t appear in any podspec, those that do are called ‘open-source shared’ Pinning in other modules’ Podfile (demo apps): PIN (e.g. AutomationTools in Orders demo app’s Podfile) Pinning in main app’s Podfile: PIN (e.g. AutomationTools) Open-Source shared pods The Just Eat pods we put open-source on github.com/justeat and are used by modules and apps. Examples: JustTrack, JustLog, ScrollingStackViewController, ErrorUtilities Pinning in other modules’ podspec: PIN w/ optimistic operator (e.g. JustTrack in Orders) Pinning in other modules’ Podfile (demo apps): PIN (e.g. JustTrack in Orders demo app’s Podfile) Pinning in main app’s Podfile: DON’T LIST latest compatible version is picked by CocoaPods (e.g. JustTrack). LIST & PIN if the pod is explicitly used in the app too, so we don’t magically inherit it. Internal Domain pods Domain modules (yellow). Examples: Orders, SERP, etc. Pinning in other modules’ podspec: NOT APPLICABLE domain pods don’t appear in other pods’ podspecs (domain modules don’t depend on other domain modules) Pinning in other modules’ Podfile (demo apps): PIN only if the pod is used in the app code, rarely the case (e.g. Account in Orders demo app’s Podfile) Pinning in main app’s Podfile: PIN (e.g. Orders) Internal Core pods Core modules (blue) minus those open-source. Examples: APIClient, AssetProvider Pinning in other modules’ podspec: NOT APPLICABLE core pods don’t appear in other pods’ podspecs (core modules are only used in the app(s)) Pinning in other modules’ Podfile (demo apps): PIN only if pod is used in the app code (e.g. APIClient in Orders demo app’s Podfile) Pinning in main app’s Podfile: PIN (e.g. NavigationEngine) Internal shared pods Shared modules (green) minus those open-source. 
Examples: JustUI, JustAnalytics Pinning in other modules’ podspec: DON’T PIN (e.g. JustUI in Orders podspec) Pinning in other modules’ Podfile (demo apps): PIN (e.g. JustUI in Orders demo app’s Podfile) Pinning in main app’s Podfile: PIN (e.g. JustUI) External shared pods Any non-Just Eat pod used by any internal or open-source pod. Examples: Usabilla, SDWebImage Pinning in other modules’ podspec: PIN (e.g. Usabilla in Orders) Pinning in other modules’ Podfile (demo apps): DON’T LIST because the version is forced by the podspec. LIST & PIN if the pod is explicitly used in the app too, so we don’t magically inherit it. Pinning is irrelevant but good practice. Pinning in main app’s Podfile: DON’T LIST because the version is forced by the podspec(s). LIST & PIN if the pod is explicitly used in the app too, so we don’t magically inherit it. Pinning is irrelevant but good practice. External pods Any non-Just Eat pod used by the app only. Examples: Instabug, GoogleAnalytics Pinning in other modules’ podspec: NOT APPLICABLE external pods don’t appear in any podspec, those that do are called ‘external shared’ Pinning in other modules’ Podfile (demo apps): PIN only if the pod is used in the app code, rarely the case (e.g. Promis) Pinning in main app’s Podfile: PIN (e.g. Adjust) Pinning is a good solution because it guarantees that we always build the same software regardless of new released versions of dependencies. It’s also true that pinning every dependency all the time makes the dependency graph hard to keep updated. This is why we decided to allow some flexibility in some cases. Following is some more reasoning. Open-source For “open-source shared” pods, we are optimistic enough (pun intended) to tolerate the usage of the optimistic operator ~> in podspecs of other pods (i.e Orders using JustTrack) so that when a new patch version is released, the consuming pod gets it for free upon running pod update. We have control over our code and, by respecting semantic versioning, we guarantee the consuming pod to always build. In case of new minor or major versions, we would have to update the podspecs of the consuming pods, which is appropriate. Also, we do need to list any “open-source shared” pod in the main app’s Podfile only if directly used by the app code. External We don’t have control over the “external” and “external shared” pods, therefore we always pin the version in the appropriate place. New patch versions might not respect semantic versioning for real and we don’t want to pull in new code unintentionally. As a rule of thumb, we prefer injecting external pods instead of creating a dependency in the podspec. Internal Internal shared pods could change frequently (not as much as domain modules). For this reason, we’ve decided to relax a constraint we had and not to pin the version in the podspec. This might cause the consuming pod to break when a new version of an “internal shared” pod is released and we run pod update. This is a compromise we can tolerate. The alternative would be to pin the version causing too much work to update the podspec of the domain modules. Continuous Integration changes With modules in separate repositories, the CI was quite simply replicating the same steps for each module: install pods run unit tests run UI tests generated code coverage submit code coverage to SonarQube Moving the modules to the monorepo meant creating smart CI pipelines that would run the same steps upon modules’ changes. 
If a pull request is to change only app code, there is no need to run any step for the modules, just the usual steps for the app: If instead, a pull request applies changes to one or more modules, we want the pipeline to first run the steps for the modules, and then the steps for the app: Even if there are no changes in the app code, module changes could likely impact the app behaviour, so it’s important to always run the app tests. We have achieved the above setup through constructing our Jenkins pipelines dynamically. The solution should scale when new modules are added to the monorepo and for this reason, it’s important that all modules: respect the same project setup (generated by CocoaPods w/ the pod lib create command) use the same naming conventions for the test schemes (UnitTests/ContractTests/UITests) make use of Apple Test Plans are in the same location ( ./Modules/ folder). Following is an excerpt of the code that constructs the modules’ stages from the Jenkinsfile used for pull request jobs. scripts = load "./Jenkins/scripts/scripts.groovy" def modifiedModules = scripts.modifiedModulesFromReferenceBranch(env.CHANGE_TARGET) def modulesThatNeedUpdating = scripts.modulesThatNeedUpdating(env.CHANGE_TARGET) def modulesToRun = (modulesThatNeedUpdating + modifiedModules).unique() sh "echo \"List of modules modified on this branch: ${modifiedModules}\"" sh "echo \"List of modules that need updating: ${modulesThatNeedUpdating}\"" sh "echo \"Pipeline will run the following modules: ${modulesToRun}\"" for (int i = 0; i < modulesToRun.size(); ++i) { def moduleName = modulesToRun[i] stage('Run pod install') { sh "bundle exec fastlane pod_install module:${moduleName}" } def schemes = scripts.testSchemesForModule(moduleName) schemes.each { scheme -> switch (scheme) { case "UnitTests": stage("${moduleName} Unit Tests") { sh "bundle exec fastlane module_unittests \ module_name:${moduleName} \ device:'${env.IPHONE_DEVICE}'" } stage("Generate ${moduleName} code coverage") { sh "bundle exec fastlane generate_sonarqube_coverage_xml" } stage("Submit ${moduleName} code coverage to SonarQube") { sh "bundle exec fastlane sonar_scanner_pull_request \ component_type:'module' \ source_branch:${env.BRANCH_NAME} \ target_branch:${env.CHANGE_TARGET} \ pull_id:${env.CHANGE_ID} \ project_key:'ios-${moduleName}' \ project_name:'iOS ${moduleName}' \ sources_path:'./Modules/${moduleName}/${moduleName}'" } break; case "ContractTests": stage('Install pact mock service') { sh "bundle exec fastlane install_pact_mock_service" } stage("${moduleName} Contract Tests") { sh "bundle exec fastlane module_contracttests \ module_name:${moduleName} \ device:'${env.IPHONE_DEVICE}'" } break; case "UITests": stage("${moduleName} UI Tests") { sh "bundle exec fastlane module_uitests \ module_name:${moduleName} \ number_of_simulators:${env.NUMBER_OF_SIMULATORS} \ device:'${env.IPHONE_DEVICE}'" } break; default: break; } } } and here are the helper functions to make it all work: def modifiedModulesFromReferenceBranch(String referenceBranch) { def script = "git diff --name-only remotes/origin/${referenceBranch}" def filesChanged = sh script: script, returnStdout: true Set modulesChanged = [] filesChanged.tokenize("\n").each { def components = it.split('/') if (components.size() > 1 && components[0] == 'Modules') { def module = components[1] modulesChanged.add(module) } } return modulesChanged } def modulesThatNeedUpdating(String referenceBranch) { def modifiedModules = modifiedModulesFromReferenceBranch(referenceBranch) def allModules 
= allMonorepoModules() def modulesThatNeedUpdating = [] for (module in allModules) { def podfileLockPath = "Modules/${module}/Example/Podfile.lock" def dependencies = podfileDependencies(podfileLockPath) def dependenciesIntersection = dependencies.intersect(modifiedModules) as TreeSet Boolean moduleNeedsUpdating = (dependenciesIntersection.size() > 0) if (moduleNeedsUpdating == true && modifiedModules.contains(module) == false) { modulesThatNeedUpdating.add(module) } } return modulesThatNeedUpdating } def podfileDependencies(String podfileLockPath) { def dependencies = [] def fileContent = readFile(file: podfileLockPath) fileContent.tokenize("\n").each { line -> def lineComponents = line.split('\\(') if (lineComponents.length > 1) { def dependencyLineSubComponents = lineComponents[0].split('-') if (dependencyLineSubComponents.length > 1) { def moduleName = dependencyLineSubComponents[1].trim() dependencies.add(moduleName) } } } return dependencies } def allMonorepoModules() { def modulesList = sh script: "ls Modules", returnStdout: true return modulesList.tokenize("\n").collect { it.trim() } } def testSchemesForModule(String moduleName) { def script = "xcodebuild -project ./Modules/${moduleName}/Example/${moduleName}.xcodeproj -list" def projectEntitites = sh script: script, returnStdout: true def schemesPart = projectEntitites.split('Schemes:')[1] def schemesPartLines = schemesPart.split(/\n/) def trimmedLined = schemesPartLines.collect { it.trim() } def filteredLines = trimmedLined.findAll { !it.allWhitespace } def allowedSchemes = ['UnitTests', 'ContractTests', 'UITests'] def testSchemes = filteredLines.findAll { allowedSchemes.contains(it) } return testSchemes } You might have noticed the modulesThatNeedUpdating method in the code above. Each module comes with a demo app using the dependencies listed in its Podfile and it’s possible that other monorepo modules are listed there as development pods. This not only means that we have to run the steps for the main app, but also the steps for every module consuming modules that show changes. For example, the Orders demo app uses APIClient, meaning that pull requests with changes in APIClient will generate pipelines including the Orders steps. Pipeline parallelization Something we initially thought was sensible to consider is the parallelisation of the pipelines across different nodes. We use parallelisation for the release pipelines and learned that, while it seems to be a fundamental requirement at first, it soon became apparent not to be so desirable nor truly fundamental for the pull requests pipeline. We’ll discuss our CI setup in a separate article, but suffice to say that we have aggressively optimized it and managed to reduce the agent pool from 10 to 5, still maintaining a good level of service. Parallelisation sensibly complicates the Jenkinsfiles and their maintainability, spreads the cost of checking out the repository across nodes and makes the logs harder to read. The main benefit would come from running the app UI tests on different nodes. In the WWDC session 413, Apple recommends generating the .xctestrun file using the build-for-testing option in xcodebuild and distribute it across the other nodes. Since our app is quite large, such file is also large and transferring it has its costs, both in time and bandwidth usage. All things considered, we decided to keep the majority of our pipelines serial. EDIT: In 2022 we have parallelised our PR pipeline in 4 branches: Validation steps (linting, Fastlane lanes tests, etc.) 
App unit tests App UI tests (short enough that there's no need to share .xctestrun across nodes) Modified modules unit tests Modified modules UI tests Conclusions We have used the setup described in this article since mid-2020 and we are very satisfied with it. We discussed the pipeline used for the pull requests, which is the most relevant one when it comes to embracing a monorepo structure. We have a few more pipelines for various use cases, such as verifying changes in release branches, keeping the code coverage metrics up-to-date with jobs running on triggers, archiving the app for internal usage and for the App Store. We hope to have given you some useful insights on how to structure a monorepo and its CI pipelines, especially if you have a structure similar to ours.

The algorithm powering iHarmony
- music
- chords
- scales
- iOS
- swift
- App Store
Problem I wrote the first version of iHarmony in 2008. It was the very first iOS app I gave birth to, combining my passion for music and programming. I remember buying an iPhone and my first Mac with the precise purpose of jumping on the apps train at a time when it wasn't clear if the apps were there to stay or were just a temporary hype. But I did it, dropped my beloved Ubuntu to join a whole new galaxy. iHarmony was also one of the first 2000 apps on the App Store. Up until the recent rewrite, iHarmony was powered by a manually crafted database containing scales, chords, and harmonizations I inputted. What-a-shame! I guess it made sense: I wanted to learn iOS, not to focus on implementing core logic independent from the platform. Clearly a much better and less error-prone way to go would be to implement an algorithm to generate all the entries based on some DSL/spec. It took me almost 12 years to decide to tackle the problem and I've recently realized that writing the algorithm I wanted was harder than I thought. I also thought it was a good idea to give SwiftUI a try since the UI of iHarmony is extremely simple but... nope. Since someone on the Internet expressed interest 😉, I wrote this article to explain how I solved the problem of modeling music theory concepts in a way that allows the generation of any sort of scales, chords, and harmonizations. I only show the code needed to get a grasp of the overall structure. I know there are other solutions ready to be used on GitHub but, while I don't particularly like any of them, the point of rewriting iHarmony from scratch was to challenge myself, not to reuse code someone else wrote. Surprisingly to me, getting to the solution described here took me 3 rewrites and 2 weeks. Solution The first fundamental building blocks to model are surely the musical notes, which are made up of a natural note and an accidental. enum NaturalNote: String { case C, D, E, F, G, A, B } enum Accidental: String { case flatFlatFlat = "bbb" case flatFlat = "bb" case flat = "b" case natural = "" case sharp = "#" case sharpSharp = "##" case sharpSharpSharp = "###" func applyAccidental(_ accidental: Accidental) throws -> Accidental {...} } struct Note: Hashable, Equatable { let naturalNote: NaturalNote let accidental: Accidental ... static let Dff = Note(naturalNote: .D, accidental: .flatFlat) static let Df = Note(naturalNote: .D, accidental: .flat) static let D = Note(naturalNote: .D, accidental: .natural) static let Ds = Note(naturalNote: .D, accidental: .sharp) static let Dss = Note(naturalNote: .D, accidental: .sharpSharp) ... func noteByApplyingAccidental(_ accidental: Accidental) throws -> Note {...} } Combinations of notes make up scales and chords and they are... many. What's fixed instead in music theory, and therefore can be hard-coded, are the keys (both major and minor) such as: C major: C, D, E, F, G, A, B A minor: A, B, C, D, E, F, G D major: D, E, F#, G, A, B, C# We'll get back to the keys later, but we can surely implement the note sequence for each musical key. typealias NoteSequence = [Note] extension NoteSequence { static let C = [Note.C, Note.D, Note.E, Note.F, Note.G, Note.A, Note.B] static let A_min = [Note.A, Note.B, Note.C, Note.D, Note.E, Note.F, Note.G] static let G = [Note.G, Note.A, Note.B, Note.C, Note.D, Note.E, Note.Fs] static let E_min = [Note.E, Note.Fs, Note.G, Note.A, Note.B, Note.C, Note.D] ... } Next stop: intervals. They are a bit more interesting as not every degree has the same types.
Let's split into 2 sets: 2nd, 3rd, 6th and 7th degrees can be minor, major, diminished and augmented 1st (and 8th), 4th and 5th degrees can be perfect, diminished and augmented. We need to use different kinds of "diminished" and "augmented" for the 2 sets as later on we'll have to calculate the accidentals needed to turn an interval into another. Some examples: to get from 2nd augmented to 2nd diminished, we need a triple flat accidental (e.g. in C major scale, from D♯ to D♭♭ there are 3 semitones) to get from 5th augmented to 5th diminished, we need a double flat accidental (e.g. in C major scale, from G♯ to G♭there are 2 semitones) We proceed to hard-code the allowed intervals in music, leaving out the invalid ones (e.g. Interval(degree: ._2, type: .augmented)) enum Degree: Int, CaseIterable { case _1, _2, _3, _4, _5, _6, _7, _8 } enum IntervalType: Int, RawRepresentable { case perfect case minor case major case diminished case augmented case minorMajorDiminished case minorMajorAugmented } struct Interval: Hashable, Equatable { let degree: Degree let type: IntervalType static let _1dim = Interval(degree: ._1, type: .diminished) static let _1 = Interval(degree: ._1, type: .perfect) static let _1aug = Interval(degree: ._1, type: .augmented) static let _2dim = Interval(degree: ._2, type: .minorMajorDiminished) static let _2min = Interval(degree: ._2, type: .minor) static let _2maj = Interval(degree: ._2, type: .major) static let _2aug = Interval(degree: ._2, type: .minorMajorAugmented) ... static let _4dim = Interval(degree: ._4, type: .diminished) static let _4 = Interval(degree: ._4, type: .perfect) static let _4aug = Interval(degree: ._4, type: .augmented) ... static let _7dim = Interval(degree: ._7, type: .minorMajorDiminished) static let _7min = Interval(degree: ._7, type: .minor) static let _7maj = Interval(degree: ._7, type: .major) static let _7aug = Interval(degree: ._7, type: .minorMajorAugmented) } Now it's time to model the keys (we touched on them above already). What's important is to define the intervals for all of them (major and minor ones). enum Key { // natural case C, A_min // sharp case G, E_min case D, B_min case A, Fs_min case E, Cs_min case B, Gs_min case Fs, Ds_min case Cs, As_min // flat case F, D_min case Bf, G_min case Ef, C_min case Af, F_min case Df, Bf_min case Gf, Ef_min case Cf, Af_min ... enum KeyType { case naturalMajor case naturalMinor case flatMajor case flatMinor case sharpMajor case sharpMinor } var type: KeyType { switch self { case .C: return .naturalMajor case .A_min: return .naturalMinor case .G, .D, .A, .E, .B, .Fs, .Cs: return .sharpMajor case .E_min, .B_min, .Fs_min, .Cs_min, .Gs_min, .Ds_min, .As_min: return .sharpMinor case .F, .Bf, .Ef, .Af, .Df, .Gf, .Cf: return .flatMajor case .D_min, .G_min, .C_min, .F_min, .Bf_min, .Ef_min, .Af_min: return .flatMinor } } var intervals: [Interval] { switch type { case .naturalMajor, .flatMajor, .sharpMajor: return [ ._1, ._2maj, ._3maj, ._4, ._5, ._6maj, ._7maj ] case .naturalMinor, .flatMinor, .sharpMinor: return [ ._1, ._2maj, ._3min, ._4, ._5, ._6min, ._7min ] } } var notes: NoteSequence { switch self { case .C: return .C case .A_min: return .A_min ... } } At this point we have all the fundamental building blocks and we can proceed with the implementation of the algorithm. The idea is to have a function that given a key a root interval a list of intervals it works out the list of notes. 
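The engine code and unit tests that follow use a CustomKey type together with a couple of helpers (shiftedNotes(by:) and firstIntervalWithDegree(_:)) and a Key.associatedCustomKey bridge that aren't shown. Here is a minimal sketch of what they might look like; the names come from the snippets below, while the implementations are my assumptions:

```swift
// A minimal sketch, assuming CustomKey simply pairs a note sequence with its intervals.
struct CustomKey {
    let notes: NoteSequence
    let intervals: [Interval]

    // Rotate the notes so that the note at the given degree becomes the root
    // (e.g. shifting C major by the 3rd degree yields E, F, G, A, B, C, D).
    func shiftedNotes(by degree: Degree) -> NoteSequence {
        let offset = degree.rawValue
        return Array(notes[offset...] + notes[..<offset])
    }

    // First interval matching a given degree (e.g. the 3rd of the key).
    func firstIntervalWithDegree(_ degree: Degree) -> Interval? {
        intervals.first { $0.degree == degree }
    }
}

extension Key {
    // Bridge from the hard-coded Key enum to the struct-based CustomKey.
    var associatedCustomKey: CustomKey {
        CustomKey(notes: notes, intervals: intervals)
    }
}
```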
In terms of inputs, it seems these three - a key, a root interval, and a list of intervals - are all we need to correctly work out scales, chords, and - by extension - also harmonizations. Mind that the root interval doesn't have to be part of the list of intervals; it is simply the interval to start from based on the given key. Giving a note as a starting point is not good enough since some scales simply don't exist for some notes (e.g. the G♯ major scale doesn't exist in the major key, and the G♭ minor scale doesn't exist in any minor key). Before progressing to the implementation, please consider the following unit tests that should make sense to you: func test_noteSequence_C_1() { let key: Key = .C let noteSequence = try! engine.noteSequence(customKey: key.associatedCustomKey, intervals: [._1, ._2maj, ._3maj, ._4, ._5, ._6maj, ._7maj]) let expectedValue: NoteSequence = [.C, .D, .E, .F, .G, .A, .B] XCTAssertEqual(noteSequence, expectedValue) } func test_noteSequence_withRoot_C_3maj_majorScaleIntervals() { let key = Key.C let noteSequence = try! engine.noteSequence(customKey: key.associatedCustomKey, rootInterval: ._3maj, intervals: [._1, ._2maj, ._3maj, ._4, ._5, ._6maj, ._7maj]) let expectedValue: NoteSequence = [.E, .Fs, .Gs, .A, .B, .Cs, .Ds] XCTAssertEqual(noteSequence, expectedValue) } func test_noteSequence_withRoot_Gsmin_3maj_alteredScaleIntervals() { let key = Key.Gs_min let noteSequence = try! engine.noteSequence(customKey: key.associatedCustomKey, rootInterval: ._3maj, intervals: [._1aug, ._2maj, ._3dim, ._4dim, ._5aug, ._6dim, ._7dim]) let expectedValue: NoteSequence = [.Bs, .Cs, .Df, .Ef, .Fss, .Gf, .Af] XCTAssertEqual(noteSequence, expectedValue) } and here is the implementation. Let's consider a simple case, so it's easier to follow: key = C major root interval = 3maj intervals = the Dorian scale intervals (1, 2maj, 3min, 4, 5, 6maj, 7min) if your music theory knowledge allowed you to understand the above unit tests, you would expect the output to be: E, F♯, G, A, B, C♯, D (which is a Dorian scale). Steps: we start by shifting the notes of the C key to position the 3rd degree (based on the 3maj) as the first element of the array, getting the note sequence E, F, G, A, B, C, D; here's the first interesting bit: we then get the list of intervals by calculating the number of semitones from the root to any other note in the sequence and working out the corresponding Interval: 1_perfect, 2_minor, 3_minor, 4_perfect, 5_perfect, 6_minor, 7_minor; we now have all we need to create a CustomKey which is pretty much a Key (with notes and intervals) but instead of being an enum with pre-defined values, is a struct; here's the second tricky part: return the notes by mapping the input intervals, applying to each note in the custom key the accidental needed to match the desired interval. In our case, the only 2 intervals to 'adjust' are the 2nd and the 6th intervals, both minor in the custom key but major in the list of intervals. So we have to apply a sharp accidental to 'correct' them. 👀 I've used force unwraps in these examples for simplicity; the code might already look complex by itself. class CoreEngine { func noteSequence(customKey: CustomKey, rootInterval: Interval = ._1, intervals: [Interval]) throws -> NoteSequence { // 1. let noteSequence = customKey.shiftedNotes(by: rootInterval.degree) let firstNoteInShiftedSequence = noteSequence.first! // 2. let adjustedIntervals = try noteSequence.enumerated().map { try interval(from: firstNoteInShiftedSequence, to: $1, targetDegree: Degree(rawValue: $0)!) } // 3.
let customKey = CustomKey(notes: noteSequence, intervals: adjustedIntervals) // 4. return try intervals.map { let referenceInterval = customKey.firstIntervalWithDegree($0.degree)! let note = customKey.notes[$0.degree.rawValue] let accidental = try referenceInterval.type.accidental(to: $0.type) return try note.noteByApplyingAccidental(accidental) } } } It's worth showing the implementation of the methods used above: private func numberOfSemitones(from sourceNote: Note, to targetNote: Note) -> Int { let notesGroupedBySameTone: [[Note]] = [ [.C, .Bs, .Dff], [.Cs, .Df, .Bss], [.D, .Eff, .Css], [.Ds, .Ef, .Fff], [.E, .Dss, .Ff], [.F, .Es, .Gff], [.Fs, .Ess, .Gf], [.G, .Fss, .Aff], [.Gs, .Af], [.A, .Gss, .Bff], [.As, .Bf, .Cff], [.B, .Cf, .Ass] ] let startIndex = notesGroupedBySameTone.firstIndex { $0.contains(sourceNote)}! let endIndex = notesGroupedBySameTone.firstIndex { $0.contains(targetNote)}! return endIndex >= startIndex ? endIndex - startIndex : (notesGroupedBySameTone.count - startIndex) + endIndex } private func interval(from sourceNote: Note, to targetNote: Note, targetDegree: Degree) throws -> Interval { let semitones = numberOfSemitones(from: sourceNote, to: targetNote) let targetType: IntervalType = try { switch targetDegree { case ._1, ._8: return .perfect ... case ._4: switch semitones { case 4: return .diminished case 5: return .perfect case 6: return .augmented default: throw CustomError.invalidConfiguration ... case ._7: switch semitones { case 9: return .minorMajorDiminished case 10: return .minor case 11: return .major case 0: return .minorMajorAugmented default: throw CustomError.invalidConfiguration } } }() return Interval(degree: targetDegree, type: targetType) } the Note's noteByApplyingAccidental method: func noteByApplyingAccidental(_ accidental: Accidental) throws -> Note { let newAccidental = try self.accidental.apply(accidental) return Note(naturalNote: naturalNote, accidental: newAccidental) } and the Accidental's apply method: func apply(_ accidental: Accidental) throws -> Accidental { switch (self, accidental) { ... case (.flat, .flatFlatFlat): throw CustomError.invalidApplicationOfAccidental case (.flat, .flatFlat): return .flatFlatFlat case (.flat, .flat): return .flatFlat case (.flat, .natural): return .flat case (.flat, .sharp): return .natural case (.flat, .sharpSharp): return .sharp case (.flat, .sharpSharpSharp): return .sharpSharp case (.natural, .flatFlatFlat): return .flatFlatFlat case (.natural, .flatFlat): return .flatFlat case (.natural, .flat): return .flat case (.natural, .natural): return .natural case (.natural, .sharp): return .sharp case (.natural, .sharpSharp): return .sharpSharp case (.natural, .sharpSharpSharp): return .sharpSharpSharp ... } With the above engine ready (and 💯﹪ unit tested!), we can now proceed to use it to work out what we ultimately need (scales, chords, and harmonizations). extension CoreEngine { func scale(note: Note, scaleIdentifier: Identifier) throws -> NoteSequence {...} func chord(note: Note, chordIdentifier: Identifier) throws -> NoteSequence {...} func harmonization(key: Key, harmonizationIdentifier: Identifier) throws -> NoteSequence {...} func chordSignatures(note: Note, scaleHarmonizationIdentifier: Identifier) throws -> [ChordSignature] {...} func harmonizations(note: Note, scaleHarmonizationIdentifier: Identifier) throws -> [NoteSequence] {...} } Conclusions There's more to it but with this post I only wanted to outline the overall idea. 
The default database is available on GitHub at albertodebortoli/iHarmonyDB. The format used is JSON and the community can now easily suggest additions. Here is how the definition of a scale looks: "scale_dorian": { "group": "group_scales_majorModes", "isMode": true, "degreeRelativeToMain": 2, "inclination": "minor", "intervals": [ "1", "2maj", "3min", "4", "5", "6maj", "7min" ] } and a chord: "chord_diminished": { "group": "group_chords_diminished", "abbreviation": "dim", "intervals": [ "1", "3min", "5dim" ] } and a harmonization: "scaleHarmonization_harmonicMajorScale4Tones": { "group": "group_harmonization_harmonic_major", "inclination": "major", "harmonizations": [ "harmonization_1_major7plus", "harmonization_2maj_minor7dim5", "harmonization_3maj_minor7", "harmonization_4_minor7plus", "harmonization_5_major7", "harmonization_6min_major7plus5sharp", "harmonization_7maj_diminished7" ] } Have to say, I'm pretty satisfied with how extensible this turned out to be. Thanks for reading 🎶
The iOS internationalization basics I keep forgetting
- iOS
- formatting
- date
- currency
- timezone
- locale
- language
Localizations, locales, timezones, date and currency formatting... it's shocking how easy it is to forget how they work and how to use them correctly. In this article, I try to summarize the bare minimum one needs to know to add internationalization support to an iOS app.
In this article, I try to summarize the bare minimum one needs to know to add internationalization support to an iOS app. Localizations, locales, timezones, date and currency formatting... it's shocking how easy it is to forget how they work and how to use them correctly. After more than 10 years of iOS development, I decided to write down a few notes on the matter, with the hope that they will come in handy again in the future, hopefully not only to me. TL;DR From Apple docs: Date: a specific point in time, independent of any calendar or time zone; TimeZone: information about standard time conventions associated with a specific geopolitical region; Locale: information about linguistic, cultural, and technological conventions for use in formatting data for presentation. Rule of thumb: All DateFormatters should use the locale and the timezone of the device; All NumberFormatters, in particular those with numberStyle set to .currency (for the sake of this article), should use a specific locale so that prices are not shown in the wrong currency. General notes on formatters Let's start by stating the obvious. Since iOS 10, Foundation (finally) provides ISO8601DateFormatter, which, alongside DateFormatter and NumberFormatter, inherits from Formatter.

| Formatter | locale property | timeZone property |
| --- | --- | --- |
| ISO8601DateFormatter | ❌ | ✅ |
| DateFormatter | ✅ | ✅ |
| NumberFormatter | ✅ | ❌ |

In an app that only consumes data from an API, the main purpose of ISO8601DateFormatter is to convert strings to dates (String -> Date) more than the inverse. DateFormatter is then used to format dates (Date -> String) to ultimately show the values in the UI. NumberFormatter, instead, converts numbers (prices in the vast majority of cases) to strings (NSNumber/Decimal -> String). Formatting dates 🕗 🕝 🕟 It seems the following 4 are amongst the most common ISO 8601 formats, including the optional UTC offset. A: 2019-10-02T16:53:42 B: 2019-10-02T16:53:42Z C: 2019-10-02T16:53:42-02:00 D: 2019-10-02T16:53:42.974Z In this article I'll stick to these formats. The 'Z' at the end of an ISO8601 date indicates that it is in UTC, not a local time zone. Locales Converting strings to dates (String -> Date) is done using ISO8601DateFormatter objects set up with various formatOptions. Once we have a Date object, we can deal with the formatting for the presentation. Here, the locale is important and things can get a bit tricky. Locales have nothing to do with timezones; locales are for applying a format using a language/region. Locale identifiers are in the form of <language_identifier>_<region_identifier> (e.g. en_GB). We should use the user's locale when formatting dates (Date -> String). Consider a British user moving to Italy: the apps should keep showing a UI localized in English, and the same applies to the dates that should be formatted using the en_GB locale. Using the it_IT locale would show "2 ott 2019, 17:53" instead of the correct "2 Oct 2019 at 17:53". Locale.current shows the locale set (overridden) in the iOS simulator, and setting the language and region in the scheme's options comes in handy for debugging. Some might think that it's acceptable to use Locale.preferredLanguages.first and create a Locale from it with let preferredLanguageLocale = Locale(identifier: Locale.preferredLanguages.first!) and set it on the formatters.
I think that doing so is not great since we would display dates using the Italian format but we won't necessarily be using the Italian language for the other UI elements as the app might not have the IT localization, causing an inconsistent experience. In short: don't use preferredLanguages, best to use Locale.current. Apple strongly suggests using en_US_POSIX pretty much everywhere (1, 2). From Apple docs: [...] if you're working with fixed-format dates, you should first set the locale of the date formatter to something appropriate for your fixed format. In most cases the best locale to choose is "en_US_POSIX", a locale that's specifically designed to yield US English results regardless of both user and system preferences. "en_US_POSIX" is also invariant in time (if the US, at some point in the future, changes the way it formats dates, "en_US" will change to reflect the new behaviour, but "en_US_POSIX" will not), and between machines ("en_US_POSIX" works the same on iOS as it does on OS X, and as it it does on other platforms). Once you've set "en_US_POSIX" as the locale of the date formatter, you can then set the date format string and the date formatter will behave consistently for all users. I couldn't find a really valid reason for doing so and quite frankly using the device locale seems more appropriate for converting dates to strings. Here is the string representation for the same date using different locales: en_US_POSIX: May 2, 2019 at 3:53 PM en_GB: 2 May 2019 at 15:53 it_IT: 2 mag 2019, 15:53 The above should be enough to show that en_US_POSIX is not what we want to use in this case, but it has more to do with maintaining a standard for communication across machines. From this article: "[...] Unless you specifically need month and/or weekday names to appear in the user's language, you should always use the special locale of en_US_POSIX. This will ensure your fixed format is actually fully honored and no user settings override your format. This also ensures month and weekday names appear in English. Without using this special locale, you may get 24-hour format even if you specify 12-hour (or visa-versa). And dates sent to a server almost always need to be in English." Timezones Stating the obvious one more time: Greenwich Mean Time (GMT) is a time zone while Coordinated Universal Time (UTC) is a time standard. There is no time difference between them. Timezones are fundamental to show the correct date/time in the final text shown to the user. The timezone value is taken from macOS and the iOS simulator inherits it, meaning that printing TimeZone.current, shows the timezone set in the macOS preferences (e.g. Europe/Berlin). Show me some code Note that in the following example, we use GMT (Greenwich Mean Time) and CET (Central European Time), which is GMT+1. Mind that it's best to reuse formatters since the creation is expensive. 
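The example below calls a small ISO8601DateFormatter extension that is redacted in the snippet. Based only on the options listed in its comments, here is a minimal sketch (an assumption, not the original implementation) of what that extension might look like:

```swift
import Foundation

extension ISO8601DateFormatter {
    // Hypothetical reconstruction of the redacted extension used below: it tries a few
    // formatters set up with different option sets until one manages to parse the string.
    // In production code these formatters would be created once and reused, per the note above.
    static func date(from string: String) -> Date? {
        let optionSets: [ISO8601DateFormatter.Options] = [
            [.withInternetDateTime],                               // B/C: 2019-11-02T16:53:42Z / with offset
            [.withInternetDateTime, .withFractionalSeconds],       // D: 2019-11-02T16:53:42.974Z
            [.withFullDate, .withTime, .withColonSeparatorInTime]  // A: 2019-11-02T16:53:42 (no zone)
        ]
        for options in optionSets {
            let formatter = ISO8601DateFormatter()
            formatter.formatOptions = options
            formatter.timeZone = TimeZone(secondsFromGMT: 0) // GMT, as per the comment in the snippet below
            if let date = formatter.date(from: string) { return date }
        }
        return nil
    }
}
```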
class CustomDateFormatter { private let dateFormatter: DateFormatter = { let dateFormatter = DateFormatter() dateFormatter.dateStyle = .medium dateFormatter.timeStyle = .short return dateFormatter }() private let locale: Locale private let timeZone: TimeZone init(locale: Locale = .current, timeZone: TimeZone = .current) { self.locale = locale self.timeZone = timeZone } func string(from date: Date) -> String { dateFormatter.locale = locale dateFormatter.timeZone = timeZone return dateFormatter.string(from: date) } } let stringA = "2019-11-02T16:53:42" let stringB = "2019-11-02T16:53:42Z" let stringC = "2019-11-02T16:53:42-02:00" let stringD = "2019-11-02T16:53:42.974Z" // The ISO8601DateFormatter's extension (redacted) // internally uses multiple formatters, each one set up with different // options (.withInternetDateTime, .withFractionalSeconds, withFullDate, .withTime, .withColonSeparatorInTime) // to be able to parse all the formats. // timeZone property is set to GMT. let dateA = ISO8601DateFormatter.date(from: stringA)! let dateB = ISO8601DateFormatter.date(from: stringB)! let dateC = ISO8601DateFormatter.date(from: stringC)! let dateD = ISO8601DateFormatter.date(from: stringD)! var dateFormatter = CustomDateFormatter(locale: Locale(identifier: "en_GB"), timeZone: TimeZone(identifier: "GMT")!) dateFormatter.string(from: dateA) // 2 Nov 2019 at 16:53 dateFormatter.string(from: dateB) // 2 Nov 2019 at 16:53 dateFormatter.string(from: dateC) // 2 Nov 2019 at 18:53 dateFormatter.string(from: dateD) // 2 Nov 2019 at 16:53 dateFormatter = CustomDateFormatter(locale: Locale(identifier: "it_IT"), timeZone: TimeZone(identifier: "CET")!) dateFormatter.string(from: dateA) // 2 nov 2019, 17:53 dateFormatter.string(from: dateB) // 2 nov 2019, 17:53 dateFormatter.string(from: dateC) // 2 nov 2019, 19:53 dateFormatter.string(from: dateD) // 2 nov 2019, 17:53 Using the CET timezone also for ISO8601DateFormatter, the final string produced for dateA would respectively be "15:53" when formatted with GMT and "16:53" when formatted with CET. As long as the string passed to ISO8601DateFormatter is in UTC, it's irrelevant to set the timezone on the formatter. Apple suggests to set the timeZone property to UTC with TimeZone(secondsFromGMT: 0), but this is irrelevant if the string representing the date already includes the timezone. If your server returns a string representing a date that is not in UTC, it's probably because of one of the following 2 reasons: it's not meant to be in UTC (questionable design decision indeed) and therefore the timezone of the device should be used instead; the backend developers implemented it wrong and they should add the 'Z 'at the end of the string if what they intended is to have the date in UTC. In short: All DateFormatters should have timezone and locale set to .current and avoid handling non-UTC string if possible. Formatting currencies € $ ¥ £ The currency symbol and the formatting of a number should be defined via a Locale, and they shouldn't be set/changed on the NumberFormatter. Don't use the user's locale (Locale.current) because it could be set to a region not supported by the app. Let's consider the example of a user's locale to be en_US, and the app to be available only for the Italian market. 
We must set a locale Locale(identifier: "it_IT") on the formatter, so that: prices will be shown only in Euro (not American Dollar) the format used will be the one of the country language (for Italy, "12,34 €", not any other variation such as "€12.34") class CurrencyFormatter { private let locale: Locale init(locale: Locale = .current) { self.locale = locale } func string(from decimal: Decimal, overriddenCurrencySymbol: String? = nil) -> String { let formatter = NumberFormatter() formatter.numberStyle = .currency if let currencySymbol = overriddenCurrencySymbol { // no point in doing this on a NumberFormatter ❌ formatter.currencySymbol = currencySymbol } formatter.locale = locale return formatter.string(from: decimal as NSNumber)! } } let itCurrencyFormatter = CurrencyFormatter(locale: Locale(identifier: "it_IT")) let usCurrencyFormatter = CurrencyFormatter(locale: Locale(identifier: "en_US")) let price1 = itCurrencyFormatter.string(from: 12.34) // "12,34 €" ✅ let price2 = usCurrencyFormatter.string(from: 12.34) // "$12.34" ✅ let price3 = itCurrencyFormatter.string(from: 12.34, overriddenCurrencySymbol: "₿") // "12,34 ₿" ❌ let price4 = usCurrencyFormatter.string(from: 12.34, overriddenCurrencySymbol: "₿") // "₿ 12.34" ❌ In short: All NumberFormatters should have the locale set to the one of the country targeted and no currencySymbol property overridden (it's inherited from the locale). Languages 🇬🇧 🇮🇹 🇳🇱 Stating the obvious one more time, but there are very rare occasions that justify forcing the language in the app: func setLanguage(_ language: String) { let userDefaults = UserDefaults.standard userDefaults.set([language], forKey: "AppleLanguages") } The above circumvents the Apple localization mechanism and needs an app restart, so don't do it and localize the app by the book: add localizations in Project -> Localizations; create a Localizable.strings file and tap the localize button in the inspector; always use NSLocalizedString() in code. Let's consider this content of Localizable.strings (English): "kHello" = "Hello"; "kFormatting" = "Some formatting 1. %@ 2. %d."; and this for another language (e.g. Italian) Localizable.strings (Italian): "kHello" = "Ciao"; "kFormatting" = "Esempio di formattazione 1) %@ 2) %d."; Simple localization Here's the trivial example: let localizedString = NSLocalizedString("kHello", comment: "") If Locale.current.languageCode is it, the value would be 'Ciao', and 'Hello' otherwise. Formatted localization For formatted strings, use the following: let stringWithFormats = NSLocalizedString("kFormatting", comment: "") String.localizedStringWithFormat(stringWithFormats, "some value", 3) As before, if Locale.current.languageCode is it, value would be 'Esempio di formattazione 1) some value 2) 3.', and 'Some formatting 1) some value 2) 3.' otherwise. Plurals localization For plurals, create a Localizable.stringsdict file and tap the localize button in the inspector. Localizable.strings and Localizable.stringsdict are independent, so there are no cross-references (something that often tricked me). 
Here is a sample content: <dict> <key>kPlurality</key> <dict> <key>NSStringLocalizedFormatKey</key> <string>Interpolated string: %@, interpolated number: %d, interpolated variable: %#@COUNT@.</string> <key>COUNT</key> <dict> <key>NSStringFormatSpecTypeKey</key> <string>NSStringPluralRuleType</string> <key>NSStringFormatValueTypeKey</key> <string>d</string> <key>zero</key> <string>nothing</string> <key>one</key> <string>%d object</string> <key>two</key> <string></string> <key>few</key> <string></string> <key>many</key> <string></string> <key>other</key> <string>%d objects</string> </dict> </dict> </dict> Localizable.stringsdict undergo the same localization mechanism of its companion Localizable.strings. It's mandatory to only implement 'other', but an honest minimum includes 'zero', 'one', and 'other'. Given the above content, the following code should be self-explanatory: let localizedHello = NSLocalizedString("kHello", comment: "") // from Localizable.strings let stringWithPlurals = NSLocalizedString("kPlurality", comment: "") // from Localizable.stringsdict String.localizedStringWithFormat(stringWithPlurals, localizedHello, 42, 1) With the en language, the value would be 'Interpolated string: Hello, interpolated number: 42, interpolated variable: 1 object.'. Use the scheme's option to run with a specific Application Language (it will change the current locale language and therefore also the output of the DateFormatters). If the language we've set or the device language are not supported by the app, the system falls back to en. References https://en.wikipedia.org/wiki/ISO_8601 https://nsdateformatter.com/ https://foragoodstrftime.com/ https://epochconverter.com/ So... that's all folks. 🌍

Modular iOS Architecture @ Just Eat
- iOS
- Just Eat
- architecture
- modularization
- Cocoapods
The journey towards a modular architecture taken by the Just Eat iOS team.
The journey we took to restructure our mobile apps towards a modular architecture. Originally published on the Just Eat Engineering Blog. Overview Modular mobile architectures have been a hot topic over the past 2 years, counting a plethora of articles and conference talks. Almost every big company promoted and discussed modularization publicly as a way to scale big projects. At Just Eat, we jumped on the modular architecture train probably before it was mainstream and, as we'll discuss in this article, the root motivation was quite peculiar in the industry. Over the years (2016-2019), we've completely revamped our iOS products from the ground up and learned a lot during this exciting and challenging journey. There is so much to say about the way we structured our iOS stack that it would probably deserve a series of articles, some of which have previously been posted. Here we summarize the high-level iOS architecture we crafted, covering the main aspects in a way concise enough for the reader to get a grasp of them and hopefully learn some valuable tips. Modular Architecture Lots of information can be found online on modular architectures. In short: A modular architecture is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each one contains everything necessary to execute only one aspect of the desired functionality. Note that modular design applies to the code you own. A project with several third-party dependencies but no sensible separation for the code written by your team is not considered modular. A modular design is more about the principle rather than the specific technology. One could achieve it in a variety of ways and with different tools. Here are some key points and examples that should inform the decision of the ifs and the hows of implementing modularization: Business reasons The company requires that parts of the codebase are reused and shared across projects, products, and teams; The company requires multiple products to be unified into a single one. Tech reasons The codebase has grown to a state where things become harder and harder to maintain and to iterate over; Development is slowed down due to multiple developers working on the same monolithic codebase; Besides reusing code, you need to port functionalities across projects/products. Multiple teams The company structured teams following strategic models (e.g. Spotify model) and functional teams only work on a subset of the final product; Ownership of small independent modules distributed across teams enables faster iterations; The much smaller cognitive overhead of working on a smaller part of the whole product can vastly simplify the overall development. Pre-existing knowledge Members of the team might already be familiar with specific solutions (Carthage, CocoaPods, Swift Package Manager, manual frameworks setup within Xcode). In the case of a specific familiarity with a system, it's recommended to start with it since all solutions come with pros and cons and there's not a clear winner at the time of writing. Modularizing code (if done sensibly) is almost always a good thing: it enforces separation of concerns, keeps complexity under control, allows faster development, etc. It has to be said that it's not necessarily what one needs for small projects and its benefits become tangible only after a certain complexity threshold is crossed. 
Journey to a new architecture In 2014, Just Eat was a completely different environment from today and back then the business decided to split the tech department into separate departments: one for the UK and one for the other countries. While this was done with the best intentions to allow faster evolution in the main market (UK), it quickly created a hard division between teams, services, and people. In less than 6 months, the UK and International APIs and consumer clients deeply diverged, introducing country-specific logic and behaviors. By mid-2016 the intent of "merging back" into a single global platform was internally announced and at that time it almost felt like a company acquisition. This is when we learned the importance of integrating people before technology. The teams didn't know each other very well and became reasonably territorial about their codebases. It didn't help that the teams spanned multiple cities. It's understandable that getting to an agreement on how to go back to a single, global, and unified platform took months. The options we considered spanned from rewriting the product from scratch to picking one of the two existing ones and making it global. A complete rewrite would have eventually turned out to be a big-bang release with the risk of regressions being too high; not something sensible or safe to pursue. Picking one codebase over the other would have necessarily let down one of the two teams and caused the re-implementation of some missing features present in the other codebase. At that time, the UK project was in better shape and new features were developed for the UK market first. The international project was a bit behind due to the extra complexity of supporting multiple countries and features being too market-specific. During that time, the company was also undergoing massive growth and with multiple functional teams having been created internally, there was an increasing need to move towards modularization. Therefore, we decided to gradually and strategically modularize parts of the mobile products and onboard them onto the other codebase in a controlled and safe way. In doing so, we took the opportunity to deeply refactor and, in the vast majority of the cases, rewrite parts in their entirety, enabling new designs, better tests, higher code coverage, and - holistically - a fully Swift codebase. We knew that the best way to refactor and clean up the code was by following a bottom-up approach. We started with the foundations to solve small and well-defined problems - such as logging, tracking, theming - enabling the team to learn to think modular. We later moved to isolating big chunks of code into functional modules to be able to onboard them into the companion codebase and ship them on a phased rollout. We soon realized we needed a solid engine to handle run-time configurations and remote feature flagging to allow switching ON and OFF features as well as entire modules. As discussed in a previous article, we developed JustTweak to achieve this goal. At the end of the journey, the UK and the International projects would look very similar, sharing a number of customizable modules, and differing only in the orchestration layer in the apps. The Just Eat iOS apps are far bigger and more complex than they might look at first glance.
Generally speaking, merging different codebases takes orders of magnitude longer than separating them, and for us, it was a process that took over 3 years, made possible thanks to the unparalleled efforts of engineers brought to work together. Over this time, the whole team learned a lot, from the basics of developing code in isolation to how to scale a complex system. Holistic Design 🤘 The following diagram outlines the modular architecture in its entirety as it is at the time of writing this article (December 2019). We can appreciate a fair number of modules clustered by type and the different consumer apps. Modular iOS architecture - holistic design Whenever possible, we took the opportunity to abstract some modules, getting them into a state that allows open-sourcing the code. All of our open-source modules are licensed under Apache 2 and can be found at github.com/justeat. Apps Due to the history of Just Eat described above, we build different apps per country per brand from different codebases. All the modularization work we did bottom-up brought us to a place where the apps differ only in the layer orchestrating the modules. With all the consumer-facing features having been moved to the domain modules, there is very little code left in the apps. Domain Modules Domain modules contain features specific to an area of the product. As the diagram above shows, the sum of all those parts makes up the Just Eat apps. These modules are constantly modified and improved by our teams, and updating the consumer apps to use newer versions is an explicit action. We don't particularly care about backward compatibility here since we are the sole consumers and it's common to break the public interface quite often if necessary. It might seem at first that domain modules should depend on some Core modules (e.g. APIClient) but doing so would complicate the dependency tree as we'll discuss further in the "Dependency Management" section of this article. Instead, we inject core modules' services, simply making them conform to protocols defined in the domain module. In this way, we maintain a good abstraction and avoid tangling the dependency graph. Core & Shared modules The Core and Shared modules represent the foundations of our stack, things like: custom UI framework theming engine logging, tracking, and analytics libraries test utilities client for all the Just Eat APIs feature flagging and experimentation engine and so forth. These modules - which are sometimes also made open-source - should not change frequently due to their nature. Here backward compatibility is important and we deprecate old APIs when introducing new ones. Both apps and domain modules can have shared modules as dependencies, while core modules can only be used by the apps. Updating the backbone of a system requires the propagation of the changes up in the stack (with its maintenance costs) and for this reason, we try to keep the number of shared modules very limited. Structure of a module As we touched on in previous articles, one of our fundamental principles is "always strive to find solutions to problems that are scalable and hide complexity as much as possible". We are almost obsessed with making things as simple as they can be. When building a module, our root principle is: Every module should be well tested, maintainable, readable, easily pluggable, and reasonably documented. The order of the adjectives implies some sort of priority. First of all, the code must be unit tested, and in the case of domain modules, UI tests are required too.
Without reasonable code coverage, no code is shipped to production. This is the first step to code maintainability, where maintainable code is intended as "code that is easy to modify or extend". Readability is down to reasonable design, naming convention, coding standards, formatting, and all that jazz. Every module exposes a Facade that is very succinct, usually no more than 200 lines long. This entry point is what makes a module easily pluggable. In our module blueprint, the bare minimum is a combination of a facade class, injected dependencies, and one or more configuration objects driving the behavior of the module (leveraging the underlying feature flagging system powered by JustTweak discussed in a previous article). The facade should be all a developer needs to know in order to consume a module without having to look at implementation details. Just to give you an idea, here is an excerpt from the generated public interface of the Account module (not including the protocols): public typealias PasswordManagementService = ForgottenPasswordServiceProtocol & ResetPasswordServiceProtocol public typealias AuthenticationService = LoginServiceProtocol & SignUpServiceProtocol & PasswordManagementService & RecaptchaServiceProtocol public typealias UserAccountService = AccountInfoServiceProtocol & ChangePasswordServiceProtocol & ForgottenPasswordServiceProtocol & AccountCreditServiceProtocol public class AccountModule { public init(settings: Settings, authenticationService: AuthenticationService, userAccountService: UserAccountService, socialLoginServices: [SocialLoginService], userInfoProvider: UserInfoProvider) public func startLogin(on viewController: UIViewController) -> FlowCoordinator public func startResetPassword(on viewController: UIViewController, token: Token) -> FlowCoordinator public func startAccountInfo(on navigationController: UINavigationController) -> FlowCoordinator public func startAccountCredit(on navigationController: UINavigationController) -> FlowCoordinator public func loginUsingSharedWebCredentials(handler: @escaping (LoginResult) -> Void) } Domain module public interface example (Account module) We believe code should be self-descriptive and we tend to put comments only on code that really deserves some explanation, very much embracing John Ousterhout's approach described in A Philosophy of Software Design. Documentation is mainly relegated to the README file and we treat every module as if it were an open-source project: the first thing consumers would look at is the README file, and so we make it as descriptive as possible. Overall design We generate all our modules using CocoaPods via $ pod lib create, which creates the project from a standard template, generating the Podfile, podspec, and demo app in a breeze. The podspec can specify additional dependencies (both third-party libraries and Core modules), while the demo app's Podfile can specify Core module dependencies alongside the module itself, which is treated as a development pod as per the standard setup. The backbone of the module, which is the framework itself, encompasses both business logic and UI, meaning that both source and asset files are part of it. In this way, the demo apps are very lightweight and only showcase module features that are implemented in the framework. The following diagram should summarize it all. Design of a module with Podfile and podspec examples Demo Apps Every module comes with a demo app we give particular care to.
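Going back for a moment to the injection of core services into domain modules described earlier, here is a simplified, hypothetical sketch of that pattern; the protocol requirement and the APIClient type are made up for illustration, only the approach reflects the one described above:

```swift
import Foundation

// Declared in the domain module (e.g. Account): the protocol describes only what the module needs.
protocol AccountInfoServiceProtocol {
    func fetchAccountInfo(completion: @escaping (Result<Data, Error>) -> Void)
}

// Concrete service living in a Core module (e.g. the API client), unaware of the domain module.
final class APIClient {
    func get(_ path: String, completion: @escaping (Result<Data, Error>) -> Void) {
        // ... perform the request
    }
}

// Conformance added in the app (composition) layer, so the domain module never depends
// on the Core module directly and can be tested against a simple mock of the protocol.
extension APIClient: AccountInfoServiceProtocol {
    func fetchAccountInfo(completion: @escaping (Result<Data, Error>) -> Void) {
        get("/account/info", completion: completion)
    }
}
```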
Demo apps are treated as first-class citizens and the stakeholders are both engineers and product managers. They massively help to showcase the module features - especially those under development - vastly simplify collaboration across Engineering, Product, and Design, and force a good mock-based test-first approach. Following is a SpringBoard page showing our demo apps, very useful to individually showcase all the functionalities implemented over time, some of which might not surface in the final product to all users. Some features are behind experiments, some still in development, while others might have been retired but still present in the modules. Every demo app has a main menu to: access the features force a specific language toggle configuration flags via JustTweak customize mock data We show the example of the Account module demo app on the right. Domain modules demo apps Internal design It's worth noting that our root principle mentioned above does not include any reference to the internal architecture of a module and this is intentional. It's common for iOS teams in the industry to debate on which architecture to adopt across the entire codebase but the truth is that such debate aims to find an answer to a non-existing problem. With an increasing number of modules and engineers, it's fundamentally impossible to align on a single paradigm shared and agreed upon by everyone. Betting on a single architectural design would ultimately let down some engineers who would complain down the road that a different design would have played out better. We decided to stick with the following rule of thumb: Developers are free to use the architectural design they feel would work better for a given problem. This approach brought us to have a variety of different designs - spanning from simple old-school MVC, to a more evolved VIPER - and we constantly learn from each other's code. What's important at the end of the day is that techniques such as inversion of control, dependency injection, and more generally the SOLID principles, are used appropriately to embrace our root principle. Dependency Management We rely heavily on CocoaPods since we adopted it in the early days as it felt like the best and most mature choice at the time we started modularizing our codebase. We think this still holds at the time of writing this article but we can envision a shift to SPM (Swift Package Manager) in 1-2 years time. With a growing number of modules, comes the responsibility of managing the dependencies between them. No panacea can cure dependency hell, but one should adopt some tricks to keep the complexity of the stack under reasonable control. Here's a summary of what worked for us: Always respect semantic versioning; Keep the dependency graph as shallow as possible. From our apps to the leaves of the graph there are no more than 2 levels; Use a minimal amount of shared dependencies. Be aware that every extra level with shared modules brings in higher complexity; Reduce the number of third-party libraries to the bare minimum. Code that's not written and owned by your team is not under your control; Never make modules within a group (domain, core, shared) depend on other modules of the same group; Automate the publishing of new versions. When a pull request gets merged into the master branch, it must also contain a version change in the podspec. 
Our continuous integration system will automatically validate the podspec, publish it to our private spec repository, and in just a matter of minutes the new version becomes available; Fix the version for dependencies in the Podfile. Whether it is a consumer app or a demo app, we want both our modules and third-party libraries not to be updated unintentionally. It's acceptable to use the optimistic operator for third-party libraries to allow automatic updates of new patch versions; Fix the version for third-party libraries in the modules' podspec. This guarantees that modules' behavior won't change in the event of changes in external libraries. Failing to do so would allow defining different versions in the app's Podfile, potentially causing the module to not function correctly or even to not compile; Do not fix the version for shared modules in the modules' podspec. In this way, we let the apps define the version in the Podfile, which is particularly useful for modules that change often, avoiding the hassle of updating the version of the shared modules in every podspec referencing it. If a new version of a shared module is not backward compatible with the module consuming it, the failure would be reported by the continuous integration system as soon as a new pull request gets raised. A note on the Monorepo approach When it comes to dependency management it would be unfair not to mention the opinable monorepo approach. Monorepos have been discussed quite a lot by the community to pose a remedy to dependency management (de facto ignoring it), some engineers praise them, others are quite contrary. Facebook, Google, and Uber are just some of the big companies known to have adopted this technique, but in hindsight, it's still unclear if it was the best decision for them. In our opinion, monorepos can sometimes be a good choice. For example, in our case, a great benefit a monorepo would give us is the ability to prepare a single pull request for both implementing a code change in a module and integrating it into the apps. This will have an even greater impact when all the Just Eat consumer apps are globalized into a single codebase. Onwards and upwards Modularizing the iOS product has been a long journey and the learnings were immense. All in all, it took more than 3 years, from May 2016 to October 2019, always balancing tech and product improvements. Our natural next step is unifying the apps into a single global project, migrating the international countries over to the UK project to ultimately reach the utopian state of having a single global app. All the modules have been implemented in a fairly abstract way and following a white labeling approach, allowing us to extend support to new countries and onboard acquired companies in the easiest possible way.

Lessons learned from handling JWT on mobile
- iOS
- Authorization
- JWT
- Token
- mobile
Implementing Authorization on mobile can be tricky. Here are some recommendations to avoid common issues.
Implementing Authorization on mobile can be tricky. Here are some recommendations to avoid common issues. Originally published on the Just Eat Engineering Blog. Overview Modern mobile apps are more complicated than they used to be back in the early days and developers have to face a variety of interesting problems. While we've put in our two cents on some of them in previous articles, this one is about authorization and what we have learned by handling JWT on mobile at Just Eat. When it comes to authorization, it's standard practice to rely on OAuth 2.0 and the companion JWT (JSON Web Token). We found this important topic was rarely discussed online while much attention was given to new proposed implementations of network stacks, maybe using recent language features or frameworks such as Combine. We'll illustrate the problems we faced at Just Eat for JWT parsing, usage, and (most importantly) refreshing. You should be able to learn a few things on how to make your app more stable by reducing the chance of unauthorized requests allowing your users to virtually always stay logged in. What is JWT JWT stands for JSON Web Token and is an open industry standard used to represent claims transferred between two parties. A signed JWT is known as a JWS (JSON Web Signature). In fact, a JWT has either to be JWS or JWE (JSON Web Encryption). RFC 7515, RFC 7516, and RFC 7519 describe the various fields and claims in detail. What is relevant for mobile developers is the following: JWT is composed of 3 parts dot-separated: Header, Payload, Signature. The Payload is the only relevant part. The Header identifies which algorithm is used to generate the signature. There are reasons for not verifying the signature client-side making the Signature part irrelevant too. JWT has an expiration date. Expired tokens should be renewed/refreshed. JWT can contain any number of extra information specific to your service. It's common practice to store JWTs in the app keychain. Here is a valid and very short token example, courtesy of jwt.io/ which we recommend using to easily decode tokens for debugging purposes. It shows 3 fragments (base64 encoded) concatenated with a dot. eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyLCJleHAiOjE1Nzc3NTA0MDB9.7hgBhNK_ZpiteB3GtLh07KJ486Vfe3WAdS-XoDksJCQ The only field relevant to this document is exp (Expiration Time), part of Payload (the second fragment). This claim identifies the time after which the JWT must not be accepted. In order to accept a JWT, it's required that the current date/time must be before the expiration time listed in the exp claim. It's accepted practice for implementers to consider for some small leeway, usually no more than a few minutes, to account for clock skew. N.B. Some API calls might demand the user is logged in (user-authenticated calls), and others don't (non-user-authenticated calls). JWT can be used in both cases, marking a distinction between Client JWT and User JWT we will refer to later on. The token refresh problem By far the most significant problem we had in the past was the renewal of the token. This seems to be something taken for granted by the mobile community, but in reality, we found it to be quite a fragile part of the authentication flow. If not done right, it can easily cause your customers to end up being logged out, with the consequent frustration we all have experienced as app users. 
The Just Eat app makes multiple API calls at startup: it fetches the order history to check for in-flight orders, fetches the most up-to-date consumer details, etc. If the token is expired when the user runs the app, a nasty race condition could cause the same refresh token to be used twice, causing the server to respond with a 401 and subsequently logging the user out of the app. This can also happen during normal execution when multiple API calls are performed very close to each other and the token expires prior to those. It gets trickier if the client and the server clocks are noticeably out of sync: while the client might believe it is in possession of a valid token, it has already expired. The following diagram should clarify the scenario. Common misbehavior I couldn't find a company (regardless of size) or indie developer who had implemented a reasonable token refresh mechanism. The common approach seems to be to refresh the token whenever an API call fails with 401 Unauthorized. This not only causes an extra call that could be avoided by locally checking if the token has expired, but it also opens the door for the race condition illustrated above. Avoid race conditions when refreshing the token 🚦 We'll explain the solution with some technical details and code snippets but what's more important is that the reader understands the root problem we are solving and why it should be given the proper attention. The more we thought about it, the more we convinced ourselves that the best way to shield ourselves from race conditions is by using threading primitives when scheduling async requests to fetch a valid token. This means that all the calls would be regulated via a filter that would hold off subsequent calls from firing until a valid token is retrieved, either from local storage or, if a refresh is needed, from the remote OAuth server. We'll show examples for iOS, so we've chosen dispatch queues and semaphores (using GCD); fancier and more abstract ways of implementing the solution might exist - in particular by leveraging modern FRP techniques - but ultimately the same primitives are used. For simplicity, let's assume that only user-authenticated API requests need to provide a JWT, commonly put in the Authorization header: Authorization: Bearer <jwt-token> The code below implements the "Get valid JWT" box from the following flowchart. The logic within this section is the one that must be implemented in mutual exclusion, in our solution, by using the combination of a serial queue and a semaphore. Here is just the minimum amount of code (Swift) needed to explain the solution. typealias Token = String typealias AuthorizationValue = String struct UserAuthenticationInfo { let bearerToken: Token // the JWT let refreshToken: Token let expiryDate: Date // computed on creation from 'exp' claim var isValid: Bool { return expiryDate.compare(Date()) == .orderedDescending } } protocol TokenRefreshing { func refreshAccessToken(_ refreshToken: Token, completion: @escaping (Result<UserAuthenticationInfo, Error>) -> Void) } protocol AuthenticationInfoStorage { var userAuthenticationInfo: UserAuthenticationInfo? func persistUserAuthenticationInfo(_ authenticationInfo: UserAuthenticationInfo?)
func wipeUserAuthenticationInfo() } class AuthorizationValueProvider { private let authenticationInfoStore: AuthenticationInfoStorage private let tokenRefreshAPI: TokenRefreshing private let queue = DispatchQueue(label: <#label#>, qos: .userInteractive) private let semaphore = DispatchSemaphore(value: 1) init(tokenRefreshAPI: TokenRefreshing, authenticationInfoStore: AuthenticationInfoStorage) { self.tokenRefreshAPI = tokenRefreshAPI self.authenticationInfoStore = authenticationInfoStore } func getValidUserAuthorization(completion: @escaping (Result<AuthorizationValue, Error>) -> Void) { queue.async { self.getValidUserAuthorizationInMutualExclusion(completion: completion) } } } Before performing any user-authenticated request, the network client asks an AuthorizationValueProvider instance to provide a valid user Authorization value (the JWT). It does so via the async method getValidUserAuthorization, which uses a serial queue to handle the requests. The chunky part is getValidUserAuthorizationInMutualExclusion. private func getValidUserAuthorizationInMutualExclusion(completion: @escaping (Result<AuthorizationValue, Error>) -> Void) { semaphore.wait() guard let authenticationInfo = authenticationInfoStore.userAuthenticationInfo else { semaphore.signal() let error = // forge an error for 'missing authorization' completion(.failure(error)) return } if authenticationInfo.isValid { semaphore.signal() completion(.success(authenticationInfo.bearerToken)) return } tokenRefreshAPI.refreshAccessToken(authenticationInfo.refreshToken) { result in switch result { case .success(let authenticationInfo): self.authenticationInfoStore.persistUserAuthenticationInfo(authenticationInfo) self.semaphore.signal() completion(.success(authenticationInfo.bearerToken)) case .failure(let error) where error.isClientError: self.authenticationInfoStore.wipeUserAuthenticationInfo() self.semaphore.signal() completion(.failure(error)) case .failure(let error): self.semaphore.signal() completion(.failure(error)) } } } The method could fire off an async call to refresh the token, and this makes the usage of the semaphore crucial. Without it, the next request to AuthorizationValueProvider would be popped from the queue and executed before the remote refresh completes. The semaphore is initialised with a value of 1, meaning that only one thread can access the critical section at a given time. We make sure to call wait at the beginning of the execution and to call signal only when we have a result and are therefore ready to leave the critical section. If the token found in the local store is still valid, we simply return it, otherwise, it's time to request a new one. In the latter case, if all goes well, we persist the token locally and allow the next request to access the method. In the case of an error, we should be careful and wipe the token only if the error is a legit client error (4xx range). This also includes the usage of a refresh token that is not valid anymore, which could happen, for instance, if the user resets the password on another platform/device. It's critical to not delete the token from the local store in the case of any other error, such as 5xx or the common Foundation's NSURLErrorNotConnectedToInternet (-1009), or else the user would unexpectedly be logged out. It's also important to note that the same AuthorizationValueProvider instance must be used by all the calls: using different ones would mean using different queues, making the entire solution ineffective.
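To give an idea of how this is meant to be consumed, here is a minimal, hypothetical sketch of a network client funnelling every user-authenticated request through a single shared AuthorizationValueProvider; the APIClient type and its composition are made up for illustration, only AuthorizationValueProvider's interface comes from the snippets above:

```swift
import Foundation

// Hypothetical consumer of AuthorizationValueProvider: every user-authenticated
// request goes through the same provider instance before hitting the network.
final class APIClient {
    private let session: URLSession
    private let authorizationProvider: AuthorizationValueProvider

    init(session: URLSession = .shared, authorizationProvider: AuthorizationValueProvider) {
        self.session = session
        self.authorizationProvider = authorizationProvider
    }

    func performUserAuthenticated(_ request: URLRequest,
                                  completion: @escaping (Result<Data, Error>) -> Void) {
        authorizationProvider.getValidUserAuthorization { result in
            switch result {
            case .failure(let error):
                // No valid JWT could be provided; surface the error (e.g. to trigger a logout upstream).
                completion(.failure(error))
            case .success(let bearerToken):
                var authenticatedRequest = request
                authenticatedRequest.setValue("Bearer \(bearerToken)", forHTTPHeaderField: "Authorization")
                let task = self.session.dataTask(with: authenticatedRequest) { data, _, error in
                    if let error = error {
                        completion(.failure(error))
                    } else {
                        completion(.success(data ?? Data()))
                    }
                }
                task.resume()
            }
        }
    }
}
```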
It seemed clear that the network client we developed in-house had to embrace JWT refresh logic at its core so that all the API calls, even new ones added in the future, would make use of the same authentication flow. General recommendations Here are a couple more (minor) suggestions we think are worth sharing since they might save you implementation time or influence the design of your solution. Correctly parse the Payload Another problem - quite trivial, and one that doesn't seem to be discussed much - is the parsing of the JWT, which can fail in some cases. In our case, this was related to the base64 encoding function and "adjusting" the base64 payload to be parsed correctly. In some implementations of base64, the padding character is not needed for decoding, since the number of missing bytes can be calculated, but in Foundation's implementation it is mandatory. This caused us some head-scratching and this StackOverflow answer helped us. The solution is - more officially - stated in RFC 7515 - Appendix C and here is the corresponding Swift code: func base64String(_ input: String) -> String { var base64 = input .replacingOccurrences(of: "-", with: "+") .replacingOccurrences(of: "_", with: "/") switch base64.count % 4 { case 2: base64 = base64.appending("==") case 3: base64 = base64.appending("=") default: break } return base64 } The majority of developers rely on external libraries to ease the parsing of the token, but as we often do, we have implemented our solution from scratch, without relying on a third-party library. Nonetheless, we feel JSONWebToken by Kyle Fuller is a very good one and it seems to implement JWT faithfully to the RFC, clearly including the necessary base64 decode function. Handle multiple JWTs for multiple app states As previously stated, when using JWT as an authentication method for non-user-authenticated calls, we need to cater for at least 3 states, shown in the following enum: enum AuthenticationStatus { case notAuthenticated case clientAuthenticated case userAuthenticated } On a fresh install, we can expect to be in the .notAuthenticated state, but as soon as the first API call is ready to be performed, a valid Client JWT has to be fetched and stored locally (at this stage, other authentication mechanisms are used, most likely Basic Auth), moving to the .clientAuthenticated state. Once the user completes the login or signup procedure, a User JWT is retrieved and stored locally (but separately from the Client JWT), entering the .userAuthenticated state, so that in the case of a logout we are left with a (hopefully still valid) Client JWT. In this scenario, almost all transitions are possible: A couple of recommendations here: if the user is logged in, it is important to use the User JWT also for the non-user-authenticated calls, as the server may personalise the response (e.g. the list of restaurants in the Just Eat app) store both Client and User JWT, so that if the user logs out, the app is left with the Client JWT ready to be used to perform non-user-authenticated requests, saving an unnecessary call to fetch a new token Conclusion In this article, we've shared some learnings from handling JWT on mobile that are not commonly discussed within the community. As a good practice, it's always best to hide complexity and implementation details.
Baking the refresh logic described above into your API client is a great way to avoid developers having to deal with complex logic to provide authorization, and it enables all the API calls to go through the same authentication mechanism. Consumers of an API client should not have the ability to access the JWT, as it's not their concern to use it or to fiddle with it. We hope this article helps raise awareness of how to better handle JWTs in mobile applications, in particular by always doing our best to avoid accidental logouts and provide a better user experience.
A Smart Feature Flagging System for iOS
- iOS
- feature flags
- Optimizely
- Just Eat
At Just Eat we have experimentation and feature flagging at our heart and we've developed a component, named JustTweak, to make things easier on iOS.
How the iOS team at Just Eat built a scalable open-source solution to handle local and remote flags. Originally published on the Just Eat Engineering Blog. Overview At Just Eat we have experimentation at our heart, and it is very much dependent on feature flagging/toggling. If we may be so bold, here's an analogy: feature flagging is to experimentation as machine learning is to AI: you cannot have the second without the first. We've developed an in-house component, named JustTweak, to handle feature flags and experiments on iOS without the hassle. We open-sourced JustTweak on github.com in 2017 and we have been evolving it ever since; in particular, with support for major experimentation platforms such as Optimizely and Firebase Remote Config. JustTweak has been instrumental in evolving the consumer Just Eat app in a fast and controlled manner, as well as supporting a large number of integrations and migrations happening under the hood. In this article, we describe the feature flagging architecture and engine, with code samples and integration suggestions. What is feature flagging Feature flagging, in its original form, is a software development technique that provides an alternative to maintaining multiple source-code branches, so that a feature can be tested even before it is completed and ready for release. Feature flags are used in code to show/hide or enable/disable specific features at runtime. The technique also allows developers to release a version of a product that has unfinished features that can be hidden from the user. Feature toggles also allow shorter software integration cycles and small incremental versions of software to be delivered without the cost of constant branching and merging - needless to say, this is crucial to have on iOS due to the App Store review process not allowing continuous delivery. A boolean flag in code is used to drive which code branch will run, but the concept can easily be extended to non-boolean flags, making them more like configuration flags that drive behavior. As an example, at Just Eat we have been gradually rewriting the whole application over time, swapping and customizing entire modules via configuration flags, allowing gradual switches from old to new features in a way that is transparent to the user. Throughout this article, the term 'tweaks' is used to refer to feature/configuration flags. A tweak can have a value of different raw types, namely Bool, String, Int, Float, and Double. Boolean tweaks can be used to drive features, like so: let isFeatureXEnabled: Bool = ... if isFeatureXEnabled { // show feature X } else { // don't show feature X } Other types of tweaks are instead useful to customise a given feature. Here is an example of configuring the environment using tweaks: let publicApiHost: String = ... let publicApiPort: Int? = ... let endpoint = Endpoint(scheme: "https", host: publicApiHost, port: publicApiPort, path: "/restaurant/:id/menu") // perform a request using the above endpoint object Problem The crucial part to get right is how and from where the flag values (isFeatureXEnabled, publicApiHost, and publicApiPort in the examples above) are fetched. Every major feature flagging/experimentation platform in the market provides its own way to fetch the values, and sometimes the APIs to do so significantly differ (e.g. Firebase Remote Config vs Optimizely).
Aware of the fact that it’s increasingly difficult to build any kind of non-trivial app without leveraging external dependencies, it's important to bear in mind that external dependencies pose a great threat to the long-term stability and viability of any application. Following are some issues related to third-party experimentation solutions: third-party SDKs are not under your control using third-party SDKs in a modular app architecture can easily cause dependency hell third-party SDKs are easily abused and various areas of your code will become entangled with them your company might decide to move to a different solution in the future and such a switch comes with costs depending on the adopted solution, you might end up tying your app more and more to platform-specific features that have no counterpart elsewhere it is very hard to support multiple feature flag providers For the above reasons, it is best to hide third-party SDKs behind some sort of layer and to implement an orchestration mechanism to allow fetching of flag values from different providers. We'll describe how we've achieved this in JustTweak. A note on the approach When designing software solutions, a clear trait was identified over time in the iOS team, which boils down to the kind of mindset and principle being used: Always strive to find solutions to problems that are scalable and hide complexity as much as possible. One word you would often hear if you were to work in the iOS team is 'Facade', which is a design pattern that serves as a front-facing interface masking more complex underlying or structural code. Facades are all over the place in our code: we try to keep components' interfaces as simple as possible so that other engineers can utilize them with minimal effort without necessarily knowing the implementation details. Furthermore, the more succinct an interface is, the less likely it is to be misused. We have some open source components embracing this approach, such as JustPersist, JustLog, and JustTrack. JustTweak is no exception and the code to integrate it successfully in a project is minimal. Sticking to the above principle, the idea behind JustTweak is to have a single entry point to gather flag values, hiding the implementation details regarding which source the flag values are gathered from. JustTweak to the rescue JustTweak provides a simple facade interface interacting with multiple configurations that are queried according to a given priority. Configurations wrap specific sources of tweaks, which are then used to drive decisions or configurations in the client code. You can find JustTweak on CocoaPods and it's on version 5.0.0 at the time of writing. We plan to add support for Carthage and Swift Package Manager in the future. A demo app is also available for you to try it out. With JustTweak you can achieve the following: use a JSON local configuration providing default tweak values use a number of remote configuration providers, such as Firebase and Optimizely, to run A/B tests and feature flagging enable, disable, and customize features locally at runtime provide a dedicated UI for customization (this comes in particularly handy for features that are under development to showcase the progress to stakeholders) Here is a screenshot of the TweakViewController taken from the demo app. Tweak values changed via this screen are immediately available to your code at runtime. Stack setup The facade class previously mentioned is represented by the TweakManager.
There should only be a single instance of the manager, ideally configured at startup, passed around via dependency injection, and kept alive for the whole lifespan of the app. Following is an example of the kind of stack implemented as a static let. static let tweakManager: TweakManager = { // mutable configuration (to override tweaks from other configurations) let userDefaultsConfiguration = UserDefaultsConfiguration(userDefaults: .standard) // remote configurations (optional) let optimizelyConfiguration = OptimizelyConfiguration() let firebaseConfiguration = FirebaseConfiguration() // local JSON configuration (default tweaks) let jsonFileURL = Bundle.main.url(forResource: "Tweaks", withExtension: "json")! let localConfiguration = LocalConfiguration(jsonURL: jsonFileURL) // priority is defined by the order in the configurations array // (from highest to lowest) let configurations: [Configuration] = [userDefaultsConfiguration, optimizelyConfiguration, firebaseConfiguration, localConfiguration] return TweakManager(configurations: configurations) }() JustTweak comes with three configurations out-of-the-box: UserDefaultsConfiguration which is mutable and uses UserDefaults as a key/value store LocalConfiguration which is read-only and uses a JSON configuration file that is meant to be the default configuration EphemeralConfiguration which is simply an instance of NSMutableDictionary Besides, JustTweak defines Configuration and MutableConfiguration protocols you can implement to create your own configurations to fit your needs. In the example project, you can find a few example configurations which you can use as a starting point. You can support any source of flags by wrapping it in a concrete implementation of the above protocols. Since the protocol methods are synchronous, you'll have to make sure that the underlying source has been initialised as soon as possible at startup. All the experimentation platforms provide mechanisms to do so, for example here is how Optimizely does it. The order of the objects in the configurations array defines the configurations' priority. The MutableConfiguration with the highest priority, such as UserDefaultsConfiguration in the example above, will be used to reflect the changes made in the UI (TweakViewController). The LocalConfiguration should have the lowest priority as it provides the default values from a local JSON file. It's also the one used by the TweakViewController to populate the UI. When fetching a tweak, the engine will inspect the chain of configurations in order and pick the tweak from the first configuration that has it. The following diagram outlines a possible setup where values present in Optimizely override others in the subsequent configurations. Eventually, if no override is found, the local configuration would return the default tweak baked into the app. Structuring the stack this way brings various advantages: the same engine is used to customise the app for development, production, and test runs consumers only interface with the facade and can ignore the implementation details new code put behind flags can be shipped with confidence since we rely on a tested engine ability to remotely override tweaks, de facto allowing great customisation of the app without the need for a new release TweakManager gets populated with the tweaks listed in the JSON file used as backing store of the LocalConfiguration instance. It is therefore important to list every supported tweak in there so that development builds of the app can allow tweaking the values.
Here is an excerpt from the file used in the TweakViewController screenshot above. { "ui_customization": { "display_red_view": { "Title": "Display Red View", "Description": "shows a red view in the main view controller", "Group": "UI Customization", "Value": false }, ... "red_view_alpha_component": { "Title": "Red View Alpha Component", "Description": "defines the alpha level of the red view", "Group": "UI Customization", "Value": 1.0 }, "label_text": { "Title": "Label Text", "Description": "the title of the main label", "Group": "UI Customization", "Value": "Test value" } }, "general": { "greet_on_app_did_become_active": { "Title": "Greet on app launch", "Description": "shows an alert on applicationDidBecomeActive", "Group": "General", "Value": false }, ... } } Testing considerations We've seen that the described architecture allows customization via configurations. We've shown in the above diagram that JustTweak can come in handy when used in conjunction with our AutomationTools framework too, which is open-source. An Ephemeral configuration would define the app environment at run-time, greatly simplifying the implementation of UI tests, which is well known to be a tedious activity. Usage The two main features of JustTweak can be accessed from the TweakManager. Checking if a feature is enabled // check for a feature to be enabled let isFeatureXEnabled = tweakManager.isFeatureEnabled("feature_X") if isFeatureXEnabled { // show feature X } else { // hide feature X } Getting and setting the value of a flag for a given feature/variable. JustTweak will return the value from the configuration with the highest priority that provides it, or nil if none of the configurations have that feature/variable. // check for a tweak value let tweak = tweakManager.tweakWith(feature: <#feature_key#>, variable: <#variable_key#>) if let tweak = tweak { // tweak was found in some configuration, use tweak.value } else { // tweak was not found in any configuration } The Configuration and MutableConfiguration protocols define the following methods: func tweakWith(feature: String, variable: String) -> Tweak? func set(_ value: TweakValue, feature: String, variable: String) func deleteValue(feature: String, variable: String) You might wonder why there is a distinction between feature and variable. The reason is that we want to support the Optimizely lingo for features and related variables and therefore the design of JustTweak necessarily has to reflect that. Other experimentation platforms (such as Firebase) have a single parameter key, but we had to harmonise for the most flexible platform we support. Property Wrappers With SE-0258, Swift 5.1 introduces Property Wrappers. If you haven't read about them, we suggest you watch the WWDC 2019 "Modern Swift API Design" talk where Property Wrappers are explained starting at 23:11. In short, a property wrapper is a generic data structure that encapsulates read/write access to a property while adding some extra behavior to augment its semantics. Common examples are @AtomicWrite and @UserDefault, but more creative usages are up for grabs and we couldn't help but think of how handy it would be to have property wrappers for feature flags, and so we implemented them. @TweakProperty and @OptionalTweakProperty are available to mark properties representing feature flags. Here are a couple of examples, making the code so much nicer than before.
@TweakProperty(fallbackValue: <#default_value#>, feature: <#feature_key#>, variable: <#variable_key#>, tweakManager: tweakManager) var isFeatureXEnabled: Bool @TweakProperty(fallbackValue: <#default_value#>, feature: <#feature_key#>, variable: <#variable_key#>, tweakManager: tweakManager) var publicApiHost: String @OptionalTweakProperty(fallbackValue: <#default_value_or_nil#>, feature: <#feature_key#>, variable: <#variable_key#>, tweakManager: tweakManager) var publicApiPort: Int? Mind that by using these property wrappers, a static instance of TweakManager must be available. Update a configuration at runtime JustTweak comes with a ViewController that allows the user to edit the tweaks while running the app. That is achieved by using the MutableConfiguration with the highest priority from the configurations array. This is de facto a debug menu, useful for development and internal builds but not something to include in release builds. #if DEBUG func presentTweakViewController() { let tweakViewController = TweakViewController(style: .grouped, tweakManager: tweakManager) // either present it modally or push it on a UINavigationController } #endif Additionally, when a value is modified in any MutableConfiguration, a notification is fired to give the clients the opportunity to react and reflect changes in the UI. override func viewDidLoad() { super.viewDidLoad() NotificationCenter.default.addObserver(self, selector: #selector(updateUI), name: TweakConfigurationDidChangeNotification, object: nil) } @objc func updateUI() { // update the UI accordingly } A note on modular architecture It's reasonable to assume that any non-trivial application approaching 2020 is composed of a number of modules and our Just Eat iOS app surely is too. With more than 30 modules developed in-house, it's crucial to find a way to inject flags into the modules but also to avoid every module depending on an external library such as JustTweak. One way to achieve this would be: define one or more protocols in the module with the set of properties desired structure the modules to allow dependency injection of objects conforming to the above protocol implement logic in the module to consume the injected objects For instance, you could have a class wrapping the manager like so: protocol ModuleASettings { var isFeatureXEnabled: Bool { get } } protocol ModuleBSettings { var publicApiHost: String { get } var publicApiPort: Int? { get } } import JustTweak public class AppConfiguration: ModuleASettings, ModuleBSettings { static let tweakManager: TweakManager = { ... }() @TweakProperty(...) var isFeatureXEnabled: Bool @TweakProperty(...) var publicApiHost: String @OptionalTweakProperty(...) var publicApiPort: Int? } Future evolution With recent versions of Swift and especially with 5.1, developers have a large set of powerful new tools, such as generics, associated types, opaque types, type erasure, etc. With Combine and SwiftUI entering the scene, developers are also starting to adopt new paradigms to write code. Sensible paths to evolve JustTweak could be to have the Tweak object be generic on TweakValue, to have TweakManager be an ObservableObject (which would enable publishing of events via Combine), and to use @EnvironmentObject to ease the dependency injection in the SwiftUI view hierarchy. While such changes will need time to be introduced since our contribution to JustTweak is in line with the evolution of the Just Eat app (and therefore a gradual adoption of SwiftUI), we can't wait to see them implemented.
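As a taste of that possible evolution, here is a speculative sketch, not part of JustTweak's current API, of a small ObservableObject wrapper that republishes a tweak value whenever the change notification fires; the class name is invented and we assume TweakConfigurationDidChangeNotification is exposed as a Notification.Name.

import Combine
import Foundation
import JustTweak

// Speculative wrapper: republishes a tweak to SwiftUI whenever any MutableConfiguration changes.
final class ObservableTweaks: ObservableObject {
    @Published private(set) var isFeatureXEnabled: Bool

    private let tweakManager: TweakManager
    private var observer: NSObjectProtocol?

    init(tweakManager: TweakManager) {
        self.tweakManager = tweakManager
        self.isFeatureXEnabled = tweakManager.isFeatureEnabled("feature_X")
        // Assumption: TweakConfigurationDidChangeNotification is a Notification.Name.
        observer = NotificationCenter.default.addObserver(
            forName: TweakConfigurationDidChangeNotification,
            object: nil,
            queue: .main) { [weak self] _ in
                guard let self = self else { return }
                self.isFeatureXEnabled = self.tweakManager.isFeatureEnabled("feature_X")
        }
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}

An instance like this could then be injected with @EnvironmentObject so that SwiftUI views re-render when a flag flips.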
If you desire to contribute, we are more than happy to receive pull requests. Conclusion In this article, we illustrated how JustTweak can be of great help in adding flexible support for feature flagging. Integrations with external providers/experimentation platforms such as Optimizely allow remote overrides of flags without the need to build a new version of the app, while the UI provided by the framework allows local overrides in development builds. We've shown how to integrate JustTweak in a project, how to set up a reasonable stack with a number of configurations, and we've given you some guidance on how to leverage it when writing UI tests. We believe JustTweak to be a great tool with no similar open source or proprietary alternatives, and we hope developers will adopt it more and more.
Deep Linking at Scale on iOS
- deep links
- deep linking
- universal links
- iOS
- navigation
- flow controllers
- state machine
- futures
- promises
- Just Eat
How the iOS team at Just Eat built a scalable architecture to support navigation and deep linking.
How the iOS team at Just Eat built a scalable architecture to support navigation and deep linking. Originally published on the Just Eat Engineering Blog. In this article, we propose an architecture to implement a scalable solution to Deep Linking on iOS using an underlying Flow Controller-based architecture, all powered by a state machine and the Futures & Promises paradigm to keep the code more readable. At Just Eat, we use a dedicated component named NavigationEngine that is domain-specific to the Just Eat apps and their use cases. A demo project named NavigationEngineDemo that includes the NavigationEngine architecture (stripped out of many details not necessary to showcase the solution) is available on GitHub. Overview Deep linking is one of the most underestimated problems to solve on mobile. A naïve explanation would say that given some sort of input, mobile apps can load a specific screen, but it only has practical meaning when combined with Universal Links on iOS and App Links on Android. In such cases, the input is a URL that would load a web page on the companion website. Let's use an example from Just Eat: opening the URL https://www.just-eat.co.uk/area/ec4m-london on a web browser would load the list of restaurants in the UK London area for the postcode EC4M. Deep linking to the mobile apps using the same URL should give a similar experience to the user. In reality, the problem is more complex than what it seems at first glance; non-tech people - and sometimes even developers - find it hard to grasp. Loading a web page in a browser is fundamentally different from implementing dedicated logic on mobile to show a UIViewController (iOS) or Activity (Android) to the user and populate it with information that will most likely be gathered from an API call. The logic to perform deep linking starts with parsing the URL, understanding the intent, constructing the user journey, performing the navigation to the target screen passing the info all the way down, and ultimately loading any required data asynchronously from a remote API. On top of all this, it also has to consider the state of the app: the user might have previously left the app in a particular state and dedicated logic would be needed to deep link from the existing to the target screen. A scenario to consider is when the user is not logged in and therefore some sections of the app may not be available. Deep linking can actually be triggered from a variety of sources: Safari web browser any app that allows tapping on a link (iMessage, Notes, etc.) any app that explicitly tries to open the app using custom URL schemes the app itself (to perform jumps between sections) TodayExtension Shortcut items (Home Screen Quick Actions) Spotlight items It should be evident that implementing a comprehensive and scalable solution that fully addresses deep linking is far from being trivial. It shouldn't be an after-thought but rather be baked into the app architecture from the initial app design. It should also be quite glaring what the main problem that needs to be solved first is: the app Navigation. Navigation itself is not a problem with a single solution (if it was, the solution would be provided by Apple/Google and developers would simply stick to it). A number of solutions were proposed over the years trying to make it simpler and generic to some degree - Router, Compass, XCoordinator to name just a few open-source components. 
I proposed the concept of Flow Controllers in my article Flow Controllers on iOS for a better navigation control back in 2014, when the community had already (I believe) started shifting towards similar approaches. Articles such as Improve your iOS Architecture with FlowControllers (by Krzysztof Zabłocki), A Better MVC, Part 2: Fixing Encapsulation (by Dave DeLong), Flow Coordinators in iOS (by Dennis Walsh), and, as recently as 2019, Navigation with Flow Controllers (by Majid Jabrayilov) were published. To me, all the proposals share one main common denominator: flow controllers/coordinators and their APIs are necessarily domain-specific. Consider the following methods taken from one of the articles mentioned above referring to specific use cases: func showLoginViewController() { ... } func showSignupViewController() { ... } func showPasswordViewController() { ... } With the support of colleagues and friends, I tried proposing a generic and abstract solution but ultimately hit a wall. Attempts were proposed using enums to list the supported transitions (as XCoordinator shows in its README for instance) or relying on meta-programming dark magic in Objective-C (which is definitely the sign of a terrible design), neither of which satisfied me in terms of reusability and abstraction. I ultimately realized that it's perfectly normal for such a problem to be domain-specific and that we don't necessarily have to find abstract solutions to all problems. Terminology For clarity, here is some of the terminology used in this article. Deep Linking: the ability to reach specific screens (via a flow) in the app either via a Deep Link or a Universal Link. Deep Link: URI with custom scheme (e.g. just-eat://just-eat.co.uk/login, just-eat-dk://just-eat.co.uk/settings) containing the information to perform deep linking in the app. When it comes to deep links, the host is irrelevant but it's good to keep it as part of the URL since it makes it easier to construct the URL using URLComponents and it keeps things more 'standard'. Universal Link: URI with http/https scheme (e.g. https://just-eat.co.uk/login) containing the information to perform deep linking in the app. Intent: the abstract intent of reaching a specific area of the app. E.g. goToOrderDetails(OrderId). State machine transition: transitions in the state machine allow navigating to a specific area in the app (state) from another one. If the app is in a state where the deep linking to a specific screen should not be allowed, the underlying state machine should not have the corresponding transition. Solution NavigationEngine is the iOS module (pod) used by the teams at Just Eat that holds the isolated logic for navigation and deep linking. As mentioned above, the magic sauce includes the usage of: FlowControllers to handle the transitions between ViewControllers in a clear and pre-defined way. Stateful state machines to allow transitions according to the current application state. More information on FSM (Finite State Machine) here and on the library at The easiest State Machine in Swift. Promis to keep the code readable using Futures & Promises to help avoid the Pyramid of doom. Sticking to such a paradigm is also a key aspect of the whole design since every API in the stack is async. More info on the library at The easiest Promises in Swift. a pretty heavy amount of 🧠 NavigationEngine maintains separation of concerns between URL Parsing, Navigation, and Deep Linking.
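As a flavour of the URL Parsing concern, here is an illustrative sketch of how a Universal Link could be mapped to an intent; the type and case names below are invented for this example, while the real parsing lives in the NavigationEngineDemo project.

import Foundation

// Illustrative only: a possible shape for turning a Universal Link into an intent.
enum ExampleNavigationIntent {
    case goToHome
    case goToSearch(postcode: String)
    case goToOrderDetails(orderId: String)
}

struct ExampleUniversalLinkParser {
    func intent(from url: URL) -> ExampleNavigationIntent? {
        guard let components = URLComponents(url: url, resolvingAgainstBaseURL: false) else {
            return nil
        }
        let pathParts = components.path.split(separator: "/").map(String.init)
        guard let first = pathParts.first else { return .goToHome }
        switch first {
        case "area" where pathParts.count > 1:
            // e.g. https://www.just-eat.co.uk/area/ec4m-london
            return .goToSearch(postcode: pathParts[1])
        case "orders" where pathParts.count > 1:
            return .goToOrderDetails(orderId: pathParts[1])
        default:
            return nil
        }
    }
}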
Readers can inspect the code in the NavigationEngineDemo project, which also includes unit tests with virtually 100% code coverage. Following is an overview of the class diagram of the entire architecture stack. Architecture class diagram While the navigation is powered by a FlowController-based architecture, the deep linking logic is powered by NavigationIntentHandler and NavigationTransitioner (on top of the navigation stack). Note that the single entry point, named DeepLinkingFacade, exposes the following API to cover the various inputs/sources we mentioned earlier: public func handleURL(_ url: URL) -> Future<Bool> public func openDeepLink(_ deepLink: DeepLink) -> Future<Bool> public func openShortcutItem(_ item: UIApplicationShortcutItem) -> Future<Bool> public func openSpotlightItem(_ userActivity: NSUserActivityProtocol) -> Future<Bool> Here are the sequence diagrams for each one. Refer to the demo project to inspect the code. Navigation As mentioned earlier, the important concept to grasp is that there is simply no single solution to Navigation. I've noticed that such a topic quickly raises discussions and each engineer has different, sometimes strong, opinions. It's more important to agree on a working solution that satisfies the given requirements than to force personal preferences. Our NavigationEngine relies on the following navigation rules (based on Flow Controllers): FlowControllers wire up the domain-specific logic for the navigation ViewControllers don't allocate FlowControllers Only FlowControllers, AppDelegate and similar top-level objects can allocate ViewControllers FlowControllers are owned (retained) by their creators FlowControllers can have children FlowControllers and create a parent-child chain and can, therefore, be in a 1-to-many relationship FlowControllers in parent-child relationships communicate via delegation ViewControllers have weak references to FlowControllers ViewControllers are in a 1-to-1 relationship with FlowControllers All the FlowController domain-specific APIs must be future-based with Future<Bool> as the return type Deep linking navigation should occur with no more than one animation (i.e. for long journeys, only the last step should be animated) Deep linking navigation that pops a stack should occur without animation In the demo project, there are a number of *FlowControllerProtocols, each corresponding to a different section/domain of the hosting app. Examples such as RestaurantsFlowControllerProtocol and OrdersFlowControllerProtocol are taken from the Just Eat app and each one has domain-specific APIs, e.g.: func goToSearchAnimated(postcode: Postcode?, cuisine: Cuisine?, animated: Bool) -> Future<Bool> func goToOrder(orderId: OrderId, animated: Bool) -> Future<Bool> func goToRestaurant(restaurantId: RestaurantId) -> Future<Bool> func goToCheckout(animated: Bool) -> Future<Bool> Note that each one: accepts the animated parameter returns Future<Bool> so that flow sequences can be combined Flow controllers should be combined sensibly to represent the app UI structure. In the case of Just Eat we have a RootFlowController as the root-level flow controller orchestrating the children. A FlowControllerProvider, used by the NavigationTransitioner, is instead the single entry point to access the entire tree of flow controllers.
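To make the rules above concrete, here is a minimal sketch of what a domain-specific flow controller could look like. The OrderId alias and OrderDetailsViewController are stand-ins invented for illustration, and the Promise/Future calls assume the Promis API used elsewhere in this article.

import UIKit
import Promis

typealias OrderId = String // stand-in for the real type

final class OrderDetailsViewController: UIViewController {
    weak var flowController: OrdersFlowController? // weak back-reference, as per the rules
    let orderId: OrderId

    init(orderId: OrderId) {
        self.orderId = orderId
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}

final class OrdersFlowController {
    // The flow controller is owned (retained) by its creator and drives a navigation context.
    private let navigationController: UINavigationController

    init(navigationController: UINavigationController) {
        self.navigationController = navigationController
    }

    // Domain-specific, future-based API, as the rules above prescribe.
    func goToOrder(orderId: OrderId, animated: Bool) -> Future<Bool> {
        let promise = Promise<Bool>() // assumed Promis API
        let orderViewController = OrderDetailsViewController(orderId: orderId)
        orderViewController.flowController = self
        navigationController.pushViewController(orderViewController, animated: animated)
        promise.setResult(true)
        return promise.future
    }
}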
NavigationTransitioner provides an API such as: func goToLogin(animated: Bool) -> Future<Bool> func goFromHomeToSearch(postcode: Postcode?, cuisine: Cuisine?, animated: Bool) -> Future<Bool> This class is responsible for keeping the underlying state machine and what the app actually shows in sync. Note that the goFromHomeToSearch method is verbose on purpose; it takes care of the specific transition from a given state (home). One level up in the stack, NavigationIntentHandler is responsible for combining the actions available from the NavigationTransitioner starting from a given NavigationIntent and creating a complete deep linking journey. It also takes into account the current state of the app. For example, showing the history of the orders should be allowed only if the user is logged in, but it would also be advisable to prompt the user to log in if they are not, and then resume the original action. Doing so provides a superior user experience compared to simply aborting the flow (it's what websites achieve by using the referring URL). Here is the implementation of the .goToOrderHistory intent in the NavigationIntentHandler: case .goToOrderHistory: switch userStatusProvider.userStatus { case .loggedIn: return navigationTransitioner.goToRoot(animated: false).thenWithResult { _ -> Future<Bool> in self.navigationTransitioner.goToOrderHistory(animated: true) } case .loggedOut: return navigationTransitioner.requestUserToLogin().then { future in switch future.state { case .result: return self.handleIntent(intent) // go recursive default: return Future<Bool>.futureWithResolution(of: future) } } } Since in the design we make the entire API future-based, we can potentially interrupt the deep linking flow to prompt the user for details or simply gather missing information from a remote API. This is crucial and allows us to construct complex flows. By design, all journeys start by resetting the state of the app by calling goToRoot. This vastly reduces the number of possible transitions to take care of, as we will describe in more detail in the next section dedicated to the underlying state machine. State Machine As you might have realized by now, the proposed architecture makes use of an underlying Finite State Machine to keep track of the state of the app during a deep linking journey. Here is a simplified version of the state machine configurations used in the Just Eat iOS apps. In the picture, the red arrows are transitions that are available for logged-in users only, the blue ones are for logged-out users only, while the black ones can always be performed. Note that every state should allow going back to the .allPoppedToRoot state so that, regardless of what the current state of the app is, we can always reset the state and perform a deep linking action starting afresh. This drastically simplifies the graph, avoiding unnecessary transitions such as the one shown in the next picture. Notice that intents (NavigationIntent) are different from transitions (NavigationEngine.StateMachine.EventType). An intent contains the information to perform a deep linking journey, while the event type is the transition from one FSM state to another (or the same). NavigationTransitioner is the class that performs the transitions and applies the companion navigation changes. A navigation step is performed only if the corresponding transition is allowed and completed successfully. If a transition is not allowed, the flow is interrupted, reporting an error in the future.
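To illustrate the gating idea without pulling in the Stateful library, here is a conceptual sketch (deliberately not the library's API, with made-up state and event names) of a transition table where any event missing from the table simply cannot fire, which is exactly what interrupts a deep linking journey.

// Conceptual sketch only: a tiny transition table standing in for the real FSM configuration.
enum ExampleAppState {
    case allPoppedToRoot, home, orderHistory, login
}

enum ExampleEventType {
    case goToRoot, goToHome, goToOrderHistory, goToLogin
}

struct ExampleTransitionTable {
    // Only the transitions listed here are allowed; anything else makes the journey fail.
    // A real configuration would also distinguish logged-in-only and logged-out-only events.
    private let allowed: [ExampleAppState: [ExampleEventType: ExampleAppState]] = [
        .allPoppedToRoot: [.goToHome: .home, .goToOrderHistory: .orderHistory, .goToLogin: .login],
        .home: [.goToRoot: .allPoppedToRoot],
        .orderHistory: [.goToRoot: .allPoppedToRoot],
        .login: [.goToRoot: .allPoppedToRoot]
    ]

    func nextState(from state: ExampleAppState, firing event: ExampleEventType) -> ExampleAppState? {
        return allowed[state]?[event]
    }
}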
You can showcase a failure in the demo app by trying to follow the Login Universal Link (https://just-eat.co.uk/login) after having faked the login when following the Order History Universal Link (https://just-eat.co.uk/orders). Usage NavigationEngineDemo includes the whole stack that readers can use in client projects. Here are the steps for a generic integration of the code. Add the NavigationEngine stack (NavigationEngineDemo/NavigationEngine folder) to the client project. This can be done by either creating a dedicated pod as we do at Just Eat or by directly including the code. Include Promis and Stateful as dependencies in your Podfile (assuming the usage of CocoaPods). Modify according to your needs, implement classes for all the *FlowControllerProtocols, and connect them to the ViewControllers of the client. This step can be quite tedious depending on the status of your app and we suggest trying to mimic what has been done in the demo app. Add CFBundleTypeRole and CFBundleURLSchemes to the main target Info.plist file to support Deep Links. E.g. <key>CFBundleURLTypes</key> <array> <dict> <key>CFBundleTypeRole</key> <string>Editor</string> <key>CFBundleURLSchemes</key> <array> <string>je-internal</string> <string>justeat</string> <string>just-eat</string> <string>just-eat-uk</string> </array> </dict> </array> Add the applinks (in the Capabilities -> Associated Domains section of the main target) you'd like to support. This will allow iOS to register the app for Universal Links on the given domains, looking for the apple-app-site-association file at the root of those domains once the app is installed. E.g. Implement concrete classes for DeepLinkingSettingsProtocol and UserStatusProviding according to your needs. Again, see the examples in the demo project. The internalDeepLinkSchemes property in DeepLinkingSettingsProtocol should contain the same values previously added to CFBundleURLSchemes, while the universalLinkHosts should contain the same applinks: values defined in Capabilities -> Associated Domains. Set up the NavigationEngine stack in the AppDelegate's applicationDidFinishLaunching. To some degree, it should be something similar to the following: var window: UIWindow? var rootFlowController: RootFlowController! var deepLinkingFacade: DeepLinkingFacade! var userStatusProvider = UserStatusProvider() let deepLinkingSettings = DeepLinkingSettings() func applicationDidFinishLaunching(_ application: UIApplication) { // Init UI Stack let window = UIWindow(frame: UIScreen.main.bounds) let tabBarController = TabBarController.instantiate() // Root Flow Controller rootFlowController = RootFlowController(with: tabBarController) tabBarController.flowController = rootFlowController // Deep Linking core let flowControllerProvider = FlowControllerProvider(rootFlowController: rootFlowController) deepLinkingFacade = DeepLinkingFacade(flowControllerProvider: flowControllerProvider, navigationTransitionerDataSource: self, settings: deepLinkingSettings, userStatusProvider: userStatusProvider) // Complete UI Stack window.rootViewController = tabBarController window.makeKeyAndVisible() self.window = window } Modify NavigationTransitionerDataSource according to your needs and implement its methods. You might want to have a separate component rather than using the AppDelegate. extension AppDelegate: NavigationTransitionerDataSource { func navigationTransitionerDidRequestUserToLogin() -> Future<Bool> { <#async logic#> } ...
} Implement the entry points for handling incoming URLs/inputs in the AppDelegate: func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool { // from internal deep links & TodayExtension deepLinkingFacade.handleURL(url).finally { future in <#...#> } return true } func application(_ application: UIApplication, continue userActivity: NSUserActivity, restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool { switch userActivity.activityType { // from Safari case NSUserActivityTypeBrowsingWeb: if let webpageURL = userActivity.webpageURL { self.deepLinkingFacade.handleURL(webpageURL).finally { future in <#...#> } return true } return false // from Spotlight case CSSearchableItemActionType: self.deepLinkingFacade.openSpotlightItem(userActivity).finally { future in let originalInput = userActivity.userInfo![CSSearchableItemActivityIdentifier] as! String <#...#> } return true default: return false } } func application(_ application: UIApplication, performActionFor shortcutItem: UIApplicationShortcutItem, completionHandler: @escaping (Bool) -> Void) { // from shortcut items (Home Screen Quick Actions) deepLinkingFacade.openShortcutItem(shortcutItem).finally { future in let originalInput = shortcutItem.type <#...#> completionHandler(future.hasResult()) } } N.B. Since a number of tasks are usually performed at startup (both from cold and warm starts), it's suggested to schedule them using operation queues. The deep linking task should be one of the last tasks in the queue to make sure that dependencies are previously set up. Here is the great Advanced NSOperations talk by Dave DeLong from WWDC15. The UniversalLinkConverter class should be modified to match the paths in the apple-app-site-association, which should be reachable at the root of the website (the associated domain). It should be noted that if the app is opened instead of the browser, it is because the Universal Link can be handled; redirecting the user back to the web would be a fundamental mistake, one that should be prevented by correctly defining the supported paths in the apple-app-site-association file. To perform internal app navigation via deep linking, the DeeplinkFactory class should be used to create DeepLink objects that can be fed into openDeepLink(_ deepLink: DeepLink); plain URLs can instead be passed to handleURL(_ url: URL). In-app testing The module exposes a DeepLinkingTesterViewController that can be used to easily test deep linking within an app. Simply define a JSON file containing the Universal Links and Deep Links to test: { "universal_links": [ "https://just-eat.co.uk/", "https://just-eat.co.uk/home", "https://just-eat.co.uk/login", ... ], "deep_links": [ "JUSTEAT://irrelev.ant/home", "justeat://irrelev.ant/login", "just-eat://irrelev.ant/resetPassword?resetToken=xyz", ... ] } Then feed it to the view controller as shown below. Alternatively, use a storyboard reference as shown in the demo app. let deepLinkingTesterViewController = DeepLinkingTesterViewController.instantiate() deepLinkingTesterViewController.delegate = self let path = Bundle.main.path(forResource: "deeplinking_test_list", ofType: "json")!
deepLinkingTesterViewController.loadTestLinks(atPath: path) and implement the DeepLinkingTesterViewControllerDelegate extension AppDelegate: DeepLinkingTesterViewControllerDelegate { func deepLinkingTesterViewController(_ deepLinkingTesterViewController: DeepLinkingTesterViewController, didSelect url: URL) { self.deepLinkingFacade.handleURL(url).finally { future in self.handleFuture(future, originalInput: url.absoluteString) } } } Conclusion The solution proposed in this article has proven to be highly scalable and customizable. We shipped it in the Just Eat iOS apps in March 2019 and our teams are gradually increasing the number of supported Universal Links, as you can see from our apple-app-site-association. Before implementing and adopting NavigationEngine, supporting new kinds of links was a real hassle. Thanks to this architecture, it is now easy for each team in the company to support new deep link journeys. The declarative approach in defining the API, states, transitions, and intents forces a single way to extend the code, which enables a coherent approach throughout the codebase.

From Beta to Bedrock: Build Products that Stick.
As a product builder over too many years to mention, I've lost count of the number of times I've seen promising ideas go from zero to hero in a few weeks, only to fizzle out within months.
Financial products, which is the field I work in, are no exception. With people’s real hard-earned money on the line, user expectations running high, and a crowded market, it's tempting to throw as many features at the wall as possible and hope something sticks. But this approach is a recipe for disaster. Here's why:
The pitfalls of feature-first development When you start building a financial product from the ground up, or are migrating existing customer journeys from paper or telephony channels onto online banking or mobile apps, it's easy to get caught up in the excitement of creating new features. You might think, "If I can just add one more thing that solves this particular user problem, they'll love me!" But what happens when you inevitably hit a roadblock because the narcs (your security team!) don’t like it? When a hard-fought feature isn't as popular as you thought, or it breaks due to unforeseen complexity?
This is where the concept of Minimum Viable Product (MVP) comes in. Jason Fried's book Getting Real and his podcast Rework often touch on this idea, even if he doesn’t always call it that. An MVP is a product that provides just enough value to your users to keep them engaged, but not so much that it becomes overwhelming or difficult to maintain. It sounds like an easy concept, but it requires a razor-sharp eye, a ruthless edge, and the courage to stick by your opinion, because it is easy to be seduced by “the Columbo Effect”… when there’s always “just one more thing…” that someone wants to add.
The problem with most finance apps, however, is that they often become a reflection of the internal politics of the business rather than an experience solely designed around the customer. This means that the focus is on delivering as many features and functionalities as possible to satisfy the needs and desires of competing internal departments, rather than providing a clear value proposition that is focused on what the people out there in the real world want. As a result, these products can very easily bloat to become a mixed bag of confusing, unrelated and ultimately unlovable customer experiences—a feature salad, you might say.
The importance of bedrock So what's a better approach? How can we build products that are stable, user-friendly, and—most importantly—stick?
That's where the concept of "bedrock" comes in. Bedrock is the core element of your product that truly matters to users. It's the fundamental building block that provides value and stays relevant over time.
In the world of retail banking, which is where I work, the bedrock has got to be in and around the regular servicing journeys. People open their current account once in a blue moon but they look at it every day. They sign up for a credit card every year or two, but they check their balance and pay their bill at least once a month.
Identifying the core tasks that people want to do and then relentlessly striving to make them easy to do, dependable, and trustworthy is where the gravy’s at.
But how do you get to bedrock? By focusing on the "MVP" approach, prioritizing simplicity, and iterating towards a clear value proposition. This means cutting out unnecessary features and focusing on delivering real value to your users.
It also means having some guts, because your colleagues might not always instantly share your vision to start with. And controversially, sometimes it can even mean making it clear to customers that you’re not going to come to their house and make their dinner. The occasional “opinionated user interface design” (i.e. clunky workaround for edge cases) might sometimes be what you need to use to test a concept or buy you space to work on something more important.
Practical strategies for building financial products that stick So what are the key strategies I've learned from my own experience and research?
- Start with a clear "why": What problem are you trying to solve? For whom? Make sure your mission is crystal clear before building anything. Make sure it aligns with your company’s objectives, too.
- Focus on a single, core feature and obsess on getting that right before moving on to something else: Resist the temptation to add too many features at once. Instead, choose one that delivers real value and iterate from there.
- Prioritize simplicity over complexity: Less is often more when it comes to financial products. Cut out unnecessary bells and whistles and keep the focus on what matters most.
- Embrace continuous iteration: Bedrock isn't a fixed destination—it's a dynamic process. Continuously gather user feedback, refine your product, and iterate towards that bedrock state.
- Stop, look and listen: Don't just test your product as part of your delivery process—test it repeatedly in the field. Use it yourself. Run A/B tests. Gather user feedback. Talk to people who use it, and refine accordingly.
There's an interesting paradox at play here: building towards bedrock means sacrificing some short-term growth potential in favour of long-term stability. But the payoff is worth it—products built with a focus on bedrock will outlast and outperform their competitors, and deliver sustained value to users over time.
So, how do you start your journey towards bedrock? Take it one step at a time. Start by identifying those core elements that truly matter to your users. Focus on building and refining a single, powerful feature that delivers real value. And above all, test obsessively—for, in the words of Abraham Lincoln, Alan Kay, or Peter Drucker (whomever you believe!!), “The best way to predict the future is to create it.”
User Research Is Storytelling
Ever since I was a boy, I’ve been fascinated with movies. I loved the characters and the excitement—but most of all the stories. I wanted to be an actor. And I believed that I’d get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Now, I realize that there’s an element of theater to UX—I hadn’t really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders—the product team and decision makers—along and get them interested in learning more.
Think of your favorite movie. More than likely it follows a three-act structure that’s commonly seen in storytelling: the setup, the conflict, and the resolution. The first act shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, problems grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be especially helpful in explaining user research to others.
Three-act structure in movies (© 2024 StudioBinder. Image used with permission from StudioBinder.). Use storytelling as a structure to do research It’s sad to say, but many have come to see research as being expendable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own opinion to make the “right” choices for users based on their experience or accepted best practices. That may get teams some of the way, but that approach can so easily miss out on solving users’ real problems. To remain user-centered, this is something we should avoid. User research elevates design. It keeps it on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors.
In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let’s look at the different acts and how they align with user research.
Act one: setup The setup is all about understanding the background, and that’s where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You’re learning about what exists today, the challenges users have, and how the challenges affect them—just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn’t need to be a huge investment in time or money.
Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: “‘Walk me through your day yesterday.’ That’s it. Present that one request. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you’re doing ethnography.” According to Hall, “[This] will probably prove quite illuminating. In the highly unlikely case that you didn’t learn anything new or useful, carry on with enhanced confidence in your direction.”
This makes total sense to me. And I love that this makes user research so accessible. You don’t need to prepare a lot of documentation; you can just recruit participants and do it! This can yield a wealth of information about your users, and it’ll help you better understand them and what’s going on in their lives. That’s really what act one is all about: understanding where users are coming from.
Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you’ve heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that’s the beginning of a compelling story. It’s the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can’t complete certain tasks. Or maybe they do empathize with users’ struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.
Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It’s like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research.
This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.
Act two: conflict Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that’s tripping users up. Like act two in a movie, more issues will crop up along the way. It’s here that you learn more about the characters as they grow and develop through this act.
Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: “As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.”
There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user’s struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.
Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests like going to a play and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they’re seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors’ interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.
If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conducting your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context; things come up that wouldn’t have in a lab environment, and the conversation can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that’s often missing from remote usability tests.
That’s not to say that the “movies”—remote sessions—aren’t a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what’s going on. And they open the doors to a much wider geographical pool of users. But with any remote session there is the potential of time wasted if participants can’t log in or get their microphone working.
The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they’re problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you’ll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things—and these twists in the story can move things in new directions.
Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you’re evaluating in the usability test aren’t grounded in a solid understanding of your users (foundational research), there’s not much to be gained by doing usability testing in the first place. That’s because you’re narrowing the focus of what you’re getting feedback on, without understanding the users' needs. As a result, there’s no way of knowing whether the designs might solve a problem that users have. It’s only feedback on a particular design in the context of a usability test.
On the other hand, if you only do foundational research, while you might have set out to solve the right problem, you won’t know whether the thing that you’re building will actually solve that. This illustrates the importance of doing both foundational and directional research.
In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates the conflict and tension in the current design by surfacing their highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.
Act three: resolution
While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it’s important to have an audience for the first two acts, it’s crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users’ feedback together, ask questions, and discuss what’s possible within the project’s constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.
This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.
Nancy Duarte in the Harvard Business Review offers an approach to structuring presentations that follow a persuasive story. “The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved,” writes Duarte. “That tension helps them persuade the audience to adopt a new mindset or behave differently.”
A persuasive story pattern.
This type of structure aligns well with research results, and particularly results from usability tests. It provides evidence for “what is”—the problems that you’ve identified. And “what could be”—your recommendations on how to address them. And so on and so forth.
You can reinforce your recommendations with examples of things that competitors are doing that could address these issues or with examples where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. And this continues until the end of the session when you’ve wrapped everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product—the denouement of the story. This stage gives stakeholders the next steps and hopefully the momentum to take those steps!
While we are nearly at the end of this story, let’s reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research:
- Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). This is the beginning of the plot. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. The output of these methods can include personas, empathy maps, user journeys, and analytics dashboards.
- Act two: Next, there’s character development. There’s conflict and tension as the protagonists encounter problems and challenges, which they must overcome. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristics evaluation. The output of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.
- Act three: The protagonists triumph and you see what a better future looks like. In act three, researchers may use methods including presentation decks, storytelling, and digital media. The output of these can be: presentation decks, video clips, audio clips, and pictures.
The researcher has multiple roles: they’re the storyteller, the director, and the producer. The participants have a small role, but they are significant characters (in the research). And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users’ stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product’s ills.
So the next time that you’re planning research with clients or you’re speaking to stakeholders about research that you’ve done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.
To Ignite a Personalization Practice, Run this Prepersonalization Workshop
Picture this. You’ve joined a squad at your company that’s designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you’re designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed.
Between the fantasy of getting it right and the fear of it going wrong—like when we encounter “persofails” in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats—the personalization gap is real. It’s an especially confounding place to be a digital professional without a map, a compass, or a plan.
For those of you venturing into personalization, there’s no Lonely Planet and few tour guides because effective personalization is so specific to each organization’s talent, technology, and market position.
But you can ensure that your team has packed its bags sensibly.
Designing for personalization makes for strange bedfellows.
A savvy art-installation satire on the challenges of humane design in the era of the algorithm. Credit: Signs of the Times, Scott Kelly and Ben Polkinghorne.
There’s a DIY formula to increase your chances for success. At minimum, you’ll defuse your boss’s irrational exuberance. Before the party, you’ll need to prepare effectively.
We call it prepersonalization.
Behind the music
Consider Spotify’s DJ feature, which debuted this past year.
https://www.youtube.com/watch?v=ok-aNnc0Dko
We’re used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.
So how do you know where to place your personalization bets? How do you design consistent interactions that won’t trip up users or—worse—breed mistrust? We’ve found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.
From Big Tech to fledgling startups, we’ve seen the same evolution up close with our clients. In our experiences with working on small and large personalization efforts, a program’s ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out.
Time and again, we’ve seen effective workshops separate future success stories from unsuccessful efforts, saving countless time, resources, and collective well-being in the process.
A personalization practice involves a multiyear effort of testing and feature development. It’s not a switch-flip moment in your tech stack. It’s best managed as a backlog that often evolves through three steps:
- customer experience optimization (CXO, also known as A/B testing or experimentation)
- always-on automations (whether rules-based or machine-generated)
- mature features or standalone product development (such as Spotify’s DJ experience)
This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs” that your organization can use to design experiences that are customized, personalized, or automated. You won’t need these cards. But we strongly recommend that you create something similar, whether that might be digital or physical.
Set your kitchen timer
How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here’s a summary of our broader approach along with details on the essential first-day activities.
The full arc of the wider workshop is threefold:
- Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
- Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work.
- Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model.
Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.
Kickstart: Whet your appetite
We call the first lesson the “landscape of connected experience.” It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.
Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here’s a list of 142 different interactions to jog your thinking.
This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here’s a long-form primer and a strategic framework.
Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.
Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It’s also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions.
Getting intentional about the desired outcomes is an important component of a large-scale personalization program. Credit: Bucket Studio.
Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can’t prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.
The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We’re pretty sure that you do: it’s just a matter of recognizing the relative size of that need and its remedy.) In our cards, we’ve noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.
Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.
The largest management consultancies have established practice areas in personalization, and they regularly research program risks and challenges. Credit: Boston Consulting Group.
At this point, you’ve hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good—you’re ready to continue.
Hit that test kitchen
Next, let’s look at what you’ll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you’re configuring a connected experience?
What’s important here is to avoid treating the installed software as if it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program’s regularly evolving menu.
Progressive personalization, a framework for designing connected experiences. Credit: Bucket Studio and Colin Eagan.
The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating “dishes” is the way that you’ll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.
The dishes will come from recipes, and those recipes have set ingredients.
In the same way that ingredients form a recipe, you can also create cards to break down a personalized interaction into its constituent parts. Credit: Bucket Studio and Colin Eagan.
Verify your ingredients
Like a good product manager, you’ll make sure—and you’ll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together.
This isn’t just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team:
- compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette;
- specify a consistent set of interactions that users find uniform or familiar;
- and develop parity across performance measurements and key performance indicators too.
This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.
Compose your recipe
What ingredients are important to you? Think of a who-what-when-why construct:
- Who are your key audience segments or groups?
- What kind of content will you give them, in what design elements, and under what circumstances?
- And for which business and user benefits?
We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.
Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below.
- Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
- Welcome automation: When there’s a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
- Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
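To make the if-then framing concrete, here is a minimal sketch of how one of these recipes could be captured as data, using the winback automation above and the who-what-when-why construct. The shape and the field names are illustrative assumptions only; they are not part of any particular personalization engine or of our card deck.

```typescript
// A hypothetical shape for a personalization "recipe card"; every name here
// is an assumption made for illustration, not a real engine's API.
interface PersonalizationRecipe {
  who: string;     // audience segment or group
  when: string;    // context or trigger for the interaction
  what: string;    // content and design element delivered
  why: string;     // business and user benefit
  measure: string; // how success will be evaluated
}

// The "winback automation" example expressed as an if-then statement.
const winback: PersonalizationRecipe = {
  who: "subscriber whose renewal is lapsing or has recently failed",
  when: "before the subscription lapses or after a failed renewal",
  what: "email with a promotional offer and a renewal reminder",
  why: "recover at-risk subscribers and reduce churn",
  measure: "renewal rate among recipients versus a holdout group",
};

// Keeping every recipe in one shared shape makes it easier to compare
// findings, keep interactions consistent, and report against the same KPIs.
const backlog: PersonalizationRecipe[] = [winback];
console.log(backlog);
```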
A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we’ve also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.
You can think of the later stages of the workshop as moving from recipes toward a cookbook in focus—like a more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.
Better kitchens require better architecture
Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. With that being said, “Complicated problems can be hard to solve, but they are addressable with rules and recipes.”
When personalization becomes a laugh line, it’s because a team is overfitting: they aren’t designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI’s output quality, for example, is indeed limited by your IA. Spotify’s poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture.
You can definitely stand the heat…
Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed.
This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. Wiring up your information layer isn’t an overnight affair. But if you use the same cookbook and shared recipes, you’ll have solid footing for success. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up.
While there are associated costs toward investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don’t squander it. The proof, as they say, is in the pudding.
The Wax and the Wane of the Web
I offer a single bit of advice to friends and family when they become new parents: When you start to think that you’ve got everything figured out, everything will change. Just as you start to get the hang of feedings, diapers, and regular naps, it’s time for solid food, potty training, and overnight sleeping. When you figure those out, it’s time for preschool and rare naps. The cycle goes on and on.
The same applies for those of us working in design and development these days. Having worked on the web for almost three decades at this point, I’ve seen the regular wax and wane of ideas, techniques, and technologies. Each time that we as developers and designers get into a regular rhythm, some new idea or technology comes along to shake things up and remake our world.
How we got here
I built my first website in the mid-’90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 “web safe” colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.
At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browser makers, developers, and designers. This shift toward standards didn’t happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to build standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers—with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.
Server-side languages like PHP, Java, and .NET overtook Perl as the predominant back-end processors, and the cgi-bin was tossed in the trash bin. With these better server-side tools came the first era of web applications, starting with content-management systems (particularly in the blogging space with tools like Blogger, Grey Matter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and back end. Suddenly, pages could update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers display fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.
These new technologies, standards, and techniques reinvigorated the industry in many ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on tons of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all manner of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts was a great start toward varying the typefaces from the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.
The web as software platform
The symbiosis between the front end and back end continued to improve, and that led to the current era of modern web applications. Between expanded server-side programming languages (which kept growing to include Ruby, Python, Go, and others) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Alongside these tools came others, including collaborative version control, build automation, and shared package libraries. What was once primarily an environment for linked documents became a realm of infinite possibilities.
At the same time, mobile devices became more capable, and they gave us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and any time.
This combination of capable mobile devices and powerful development tools contributed to the waxing of social media and other centralized tools for people to connect and consume. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media offered connections on a global scale, with both the good and bad that that entails.
Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote “Of Time and the Web.” Or check out the “Web Design History Timeline” at the Web Design Museum. Neal Agarwal also has a fun tour through “Internet Artifacts.”
Where we are now
In the last couple of years, it’s felt like we’ve begun to reach another major inflection point. As social-media platforms fracture and wane, there’s been a growing interest in owning our own content again. There are many different ways to make a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all flavors. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other tools of the IndieWeb can help with this, but they’re still relatively underimplemented and hard to use for the less nerdy. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.
Browser support for CSS, JavaScript, and other standards like web components has accelerated, especially through efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to find that its coverage is already above 80 percent. Nowadays, the barrier to using newer techniques often isn’t browser support but simply the limits of how quickly designers and developers can learn what’s available and how to adopt it.
Today, with a few commands and a couple of lines of code, we can prototype almost any idea. All the tools that we now have available make it easier than ever to start something new. But the upfront cost that these frameworks may save in initial delivery eventually comes due as upgrading and maintaining them becomes a part of our technical debt.
If we rely on third-party frameworks, adopting new standards can sometimes take longer since we may have to wait for those frameworks to adopt those standards. These frameworks—which used to let us adopt new techniques sooner—have now become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether through poor code, network issues, or other environmental factors), there’s often no alternative, leaving users with blank or broken pages.
Where do we go from here?
Today’s hacks help to shape tomorrow’s standards. And there’s nothing inherently wrong with embracing hacks—for now—to move the present forward. Problems only arise when we’re unwilling to admit that they’re hacks or we hesitate to replace them. So what can we do to create the future we want for the web?
Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What’s the cost to users? To future developers? To standards adoption? Sometimes the convenience may be worth it. Sometimes it’s just a hack that you’ve grown accustomed to. And sometimes it’s holding you back from even better options.
Start from standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. The same isn’t always true of third-party frameworks. Sites built with even the hackiest of HTML from the ’90s still work just fine today. The same can’t always be said of sites built with frameworks even after just a couple years.
Design with care. Whether your craft is code, pixels, or processes, consider the impacts of each decision. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Rather than rushing headlong to “move fast and break things,” use the time saved by modern tools to consider more carefully and design with deliberation.
Always be learning. If you’re always learning, you’re also growing. Sometimes it may be hard to pinpoint what’s worth learning and what’s just today’s hack. You might end up focusing on something that won’t matter next year, even if you were to focus solely on learning standards. (Remember XHTML?) But constant learning opens up new connections in your brain, and the hacks that you learn one day may help to inform different experiments another day.
Play, experiment, and be weird! This web that we’ve built is the ultimate experiment. It’s the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be courageous and try new things. Build a playground for ideas. Make goofy experiments in your own mad science lab. Start your own small business. There has never been a more empowering place to be creative, take risks, and explore what we’re capable of.
Share and amplify. As you experiment, play, and learn, share what’s worked for you. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they’ve taught you.
Go forth and make
As designers and developers for the web (and beyond), we’re responsible for building the future every day, whether that may take the shape of personal websites, social media tools used by billions, or anything in between. Let’s imbue our values into the things that we create, and let’s make the web a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. Every time you think that you’ve mastered the web, everything will change.
Opportunities for AI in Accessibility
In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.
I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real risks or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day.
Alternative text
Joe’s piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within the contexts that they’re in (which is a consequence of having separate “foundation” models for text analysis and image analysis). Today’s models aren’t trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either. Still, I think there’s potential in this space.
As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying What is this BS? That’s not right at all… Let me try to offer a starting point—I think that’s a win.
Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions and it’ll improve authors’ efficiency toward making their pages more accessible.
While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT4 announcement points to an interesting opportunity as well. Let’s suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let’s suppose that that was the description that was in place.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:
- Do more people use smartphones or feature phones?
- How many more?
- Is there a group of people that don’t fall into either of these buckets?
- How many is that?
Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding “facts”—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.
Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a possibility.
Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!
Matching algorithms
Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there’s real potential for algorithm development to help people with disabilities.
Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.
When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important.
Imagine that a social media company’s recommendation engine was tuned to analyze who you’re following and to prioritize follow recommendations for people who talk about similar things but who are different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren’t white or aren’t male who also talk about AI. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending that any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.
Other ways that AI can help people with disabilities
If I weren’t trying to put this together between other tasks, I’m sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order:
- Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential.
- Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
- Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that’s prepped for Bionic Reading.
We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.
Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that’s authored by people with a range of disabilities, and make sure that that’s well represented in the training data.
Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon.
Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.
I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.
Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.
I am a creative.
I am a creative. What I do is alchemy. It is a mystery. I do not so much do it, as let it be done through me.
I am a creative. Not all creative people like this label. Not all see themselves this way. Some creative people see science in what they do. That is their truth, and I respect it. Maybe I even envy them, a little. But my process is different—my being is different.
Apologizing and qualifying in advance is a distraction. That’s what my brain does to sabotage me. I set it aside for now. I can come back later to apologize and qualify. After I’ve said what I came to say. Which is hard enough.
Except when it is easy and flows like a river of wine.
Sometimes it does come that way. Sometimes what I need to create comes in an instant. I have learned not to say it at that moment, because if you admit that sometimes the idea just comes and it is the best idea and you know it is the best idea, they think you don’t work hard enough.
Sometimes I work and work and work until the idea comes. Sometimes it comes instantly and I don’t tell anyone for three days. Sometimes I’m so excited by the idea that came instantly that I blurt it out, can’t help myself. Like a boy who found a prize in his Cracker Jacks. Sometimes I get away with this. Sometimes other people agree: yes, that is the best idea. Most times they don’t and I regret having given way to enthusiasm.
Enthusiasm is best saved for the meeting where it will make a difference. Not the casual get-together that precedes that meeting by two other meetings. Nobody knows why we have all these meetings. We keep saying we’re doing away with them, but then just finding other ways to have them. Sometimes they are even good. But other times they are a distraction from the actual work. The proportion between when meetings are useful, and when they are a pitiful distraction, varies, depending on what you do and where you do it. And who you are and how you do it. Again I digress. I am a creative. That is the theme.
Sometimes many hours of hard and patient work produce something that is barely serviceable. Sometimes I have to accept that and move on to the next project.
Don’t ask about process. I am a creative.
I am a creative. I don’t control my dreams. And I don’t control my best ideas.
I can hammer away, surround myself with facts or images, and sometimes that works. I can go for a walk, and sometimes that works. I can be making dinner and there’s a Eureka having nothing to do with sizzling oil and bubbling pots. Often I know what to do the instant I wake up. And then, almost as often, as I become conscious and part of the world again, the idea that would have saved me turns to vanishing dust in a mindless wind of oblivion. For creativity, I believe, comes from that other world. The one we enter in dreams, and perhaps, before birth and after death. But that’s for poets to wonder, and I am not a poet. I am a creative. And it’s for theologians to mass armies about in their creative world that they insist is real. But that is another digression. And a depressing one. Maybe on a much more important topic than whether I am a creative or not. But still a digression from what I came here to say.
Sometimes the process is avoidance. And agony. You know the cliché about the tortured artist? It’s true, even when the artist (and let’s put that noun in quotes) is trying to write a soft drink jingle, a callback in a tired sitcom, a budget request.
Some people who hate being called creative may be closeted creatives, but that’s between them and their gods. No offense meant. Your truth is true, too. But mine is for me.
Creatives recognize creatives.
Creatives recognize creatives like queers recognize queers, like real rappers recognize real rappers, like cons know cons. Creatives feel massive respect for creatives. We love, honor, emulate, and practically deify the great ones. To deify any human is, of course, a tragic mistake. We have been warned. We know better. We know people are just people. They squabble, they are lonely, they regret their most important decisions, they are poor and hungry, they can be cruel, they can be just as stupid as we can, because, like us, they are clay. But. But. But they make this amazing thing. They birth something that did not exist before them, and could not exist without them. They are the mothers of ideas. And I suppose, since it’s just lying there, I have to add that they are the mothers of invention. Ba dum bum! OK, that’s done. Continue.
Creatives belittle our own small achievements, because we compare them to those of the great ones. Beautiful animation! Well, I’m no Miyazaki. Now THAT is greatness. That is greatness straight from the mind of God. This half-starved little thing that I made? It more or less fell off the back of the turnip truck. And the turnips weren’t even fresh.
Creatives know that, at best, they are Salieri. Even the creatives who are Mozart believe that.
I am a creative. I haven’t worked in advertising in 30 years, but in my nightmares, it’s my former creative directors who judge me. And they are right to do so. I am too lazy, too facile, and when it really counts, my mind goes blank. There is no pill for creative dysfunction.
I am a creative. Every deadline I make is an adventure that makes Indiana Jones look like a pensioner snoring in a deck chair. The longer I remain a creative, the faster I am when I do my work and the longer I brood and walk in circles and stare blankly before I do that work.
I am still 10 times faster than people who are not creative, or people who have only been creative a short while, or people who have only been professionally creative a short while. It’s just that, before I work 10 times as fast as they do, I spend twice as long as they do putting the work off. I am that confident in my ability to do a great job when I put my mind to it. I am that addicted to the adrenaline rush of postponement. I am still that afraid of the jump.
I am not an artist.
I am a creative. Not an artist. Though I dreamed, as a lad, of someday being that. Some of us belittle our gifts and dislike ourselves because we are not Michelangelos and Warhols. That is narcissism—but at least we aren’t in politics.
I am a creative. Though I believe in reason and science, I decide by intuition and impulse. And live with what follows—the catastrophes as well as the triumphs.
I am a creative. Every word I’ve said here will annoy other creatives, who see things differently. Ask two creatives a question, get three opinions. Our disagreement, our passion about it, and our commitment to our own truth are, at least to me, the proofs that we are creatives, no matter how we may feel about it.
I am a creative. I lament my lack of taste in the areas about which I know very little, which is to say almost all areas of human knowledge. And I trust my taste above all other things in the areas closest to my heart, or perhaps, more accurately, to my obsessions. Without my obsessions, I would probably have to spend my time looking life in the eye, and almost none of us can do that for long. Not honestly. Not really. Because much in life, if you really look at it, is unbearable.
I am a creative. I believe, as a parent believes, that when I am gone, some small good part of me will carry on in the mind of at least one other person.
Working saves me from worrying about work.
I am a creative. I live in dread of my small gift suddenly going away.
I am a creative. I am too busy making the next thing to spend too much time deeply considering that almost nothing I make will come anywhere near the greatness I comically aspire to.
I am a creative. I believe in the ultimate mystery of process. I believe in it so much, I am even fool enough to publish an essay I dictated into a tiny machine and didn’t take time to review or revise. I won’t do this often, I promise. But I did it just now, because, as afraid as I might be of your seeing through my pitiful gestures toward the beautiful, I was even more afraid of forgetting what I came to say.
There. I think I’ve said it.
Humility: An Essential Value
Humility, a designer’s essential value—that has a nice ring to it. What about humility, an office manager’s essential value? Or a dentist’s? Or a librarian’s? They all sound great. When humility is our guiding light, the path is always open for fulfillment, evolution, connection, and engagement. In this chapter, we’re going to talk about why.
That said, this is a book for designers, and to that end, I’d like to start with a story—well, a journey, really. It’s a personal one, and I’m going to make myself a bit vulnerable along the way. I call it:
The Tale of Justin’s Preposterous Pate
When I was coming out of art school, a long-haired, goateed neophyte, print was a known quantity to me; design on the web, however, was rife with complexities to navigate and discover, a problem to be solved. Though I had been formally trained in graphic design, typography, and layout, what fascinated me was how these traditional skills might be applied to a fledgling digital landscape. This theme would ultimately shape the rest of my career.
So rather than graduate and go into print like many of my friends, I devoured HTML and JavaScript books into the wee hours of the morning and taught myself how to code during my senior year. I wanted—nay, needed—to better understand the underlying implications of what my design decisions would mean once rendered in a browser.
The late ’90s and early 2000s were the so-called “Wild West” of web design. Designers at the time were all figuring out how to apply design and visual communication to the digital landscape. What were the rules? How could we break them and still engage, entertain, and convey information? At a more macro level, how could my values, inclusive of humility, respect, and connection, align in tandem with that? I was hungry to find out.
Though I’m talking about a different era, those considerations are timeless, spanning both non-career interests and the world of design. What are your core passions, or values, that transcend medium? It’s essentially the same concept we discussed earlier: what fulfills you is agnostic of the tangible or digital realms; the core themes are all the same.
First within tables, animated GIFs, Flash, then with Web Standards, divs, and CSS, there was personality, raw unbridled creativity, and unique means of presentment that often defied any semblance of a visible grid. Splash screens and “browser requirement” pages aplenty. Usability and accessibility were typically victims of such a creation; those paramount facets of any digital design were largely (and, in hindsight, unfairly) disregarded in favor of experimentation.
For example, this iteration of my personal portfolio site (“the pseudoroom”) from that era was experimental, if not a bit heavy-handed, in the visual communication of the concept of a living sketchbook. Very skeuomorphic. I collaborated with fellow designer and dear friend Marc Clancy (now a co-founder of the creative project organizing app Milanote) on this one, where we’d first sketch and then pass a Photoshop file back and forth to trick things out and play with varied user interactions. Then, I’d break it down and code it into a digital layout.
Figure 1: “the pseudoroom” website, hitting the sketchbook metaphor hard.
Along with design folio pieces, the site also offered free downloads for Mac OS customizations: desktop wallpapers that were effectively design experimentation, custom-designed typefaces, and desktop icons.
From around the same time, GUI Galaxy was a design, pixel art, and Mac-centric news portal some graphic designer friends and I conceived, designed, developed, and deployed.
Figure 2: GUI Galaxy, web standards-compliant design news portal
Design news portals were incredibly popular during this period, featuring (what would now be considered) Tweet-size, small-format snippets of pertinent news from the categories I previously mentioned. If you took Twitter, curated it to a few categories, and wrapped it in a custom-branded experience, you’d have a design news portal from the late 90s / early 2000s.
We as designers had evolved and created a bandwidth-sensitive, web standards award-winning, much more accessibility-conscious website. Still ripe with experimentation, yet more mindful of equitable engagement. You can see a couple of content panes here, noting general news (tech, design) and Mac-centric news below. We also offered many of the custom downloads I cited before as present on my folio site but branded and themed to GUI Galaxy.
The site’s backbone was a homegrown CMS, with the presentation layer consisting of global design + illustration + news author collaboration. And the collaboration effort here, in addition to experimentation on a ‘brand’ and content delivery, was hitting my core. We were designing something bigger than any single one of us and connecting with a global audience.
Collaboration and connection transcend medium in their impact, immensely fulfilling me as a designer.
Now, why am I taking you down this trip of design memory lane? Two reasons.
First, there’s a reason for the nostalgia for that design era (the “Wild West” era, as I called it earlier): the inherent exploration, personality, and creativity that saturated many design portals and personal portfolio sites. Ultra-finely detailed pixel art UI, custom illustration, bespoke vector graphics, all underpinned by a strong design community.
Today’s web design has been in a period of stagnation. I suspect there’s a strong chance you’ve seen a site whose structure looks something like this: a hero image / banner with text overlaid, perhaps with a lovely rotating carousel of images (laying the snark on heavy there), a call to action, and three columns of sub-content directly beneath. Maybe an icon library is employed with selections that vaguely relate to their respective content.
Design, as it’s applied to the digital landscape, is in dire need of thoughtful layout, typography, and visual engagement that goes hand-in-hand with all the modern considerations we now know are paramount: usability. Accessibility. Load times and bandwidth-sensitive content delivery. A responsive presentation that meets human beings wherever they’re engaging from. We must be mindful of, and respectful toward, those concerns—but not at the expense of creativity of visual communication or via replicating cookie-cutter layouts.
Pixel Problems
Websites during this period were often designed and built on Macs whose OS and desktops looked something like this. This is Mac OS 7.5, but 8 and 9 weren’t that different.
Figure 3: A Mac OS 7.5-centric desktop.
Desktop icons fascinated me: how could any single one, at any given point, stand out to get my attention? In this example, the user’s desktop is tidy, but think of a more realistic example with icon pandemonium. Or, say an icon was part of a larger system grouping (fonts, extensions, control panels)—how did it also maintain cohesion amongst a group?
These were 32 x 32 pixel creations, utilizing a 256-color palette, designed pixel-by-pixel as mini mosaics. To me, this was the embodiment of digital visual communication under such ridiculous constraints. And often, ridiculous restrictions can yield the purification of concept and theme.
So I began to research and do my homework. I was a student of this new medium, hungry to dissect, process, discover, and make it my own.
Expanding upon the notion of exploration, I wanted to see how I could push the limits of a 32x32 pixel grid with that 256-color palette. Those ridiculous constraints forced a clarity of concept and presentation that I found incredibly appealing. The digital gauntlet had been tossed, and that challenge fueled me. And so, in my dorm room into the wee hours of the morning, I toiled away, bringing conceptual sketches into mini mosaic fruition.
These are some of my creations, made with the only tool available at the time for creating icons: ResEdit. ResEdit was a clunky, built-in Mac OS utility not really made for exactly what we were using it for. At the core of all of this work: Research. Challenge. Problem-solving. Again, these core connection-based values are agnostic of medium.
Figure 4: A selection of my pixel art design, 32x32 pixel canvas, 8-bit palette
There’s one more design portal I want to talk about, which also serves as the second reason for my story to bring this all together.
This is K10k, short for Kaliber 1000. K10k was founded in 1998 by Michael Schmidt and Toke Nygaard, and was the design news portal on the web during this period. With its pixel art-fueled presentation, ultra-focused care given to every facet and detail, and with many of the more influential designers of the time who were invited to be news authors on the site, well... it was the place to be, my friend. With respect where respect is due, GUI Galaxy’s concept was inspired by what these folks were doing.
Figure 5: The K10k website
For my part, the combination of my web design work and pixel art exploration began to get me some notoriety in the design scene. Eventually, K10k noticed and added me as one of their very select group of news authors to contribute content to the site.
Amongst my personal work and side projects—and now with this inclusion in the design community—this put me on the map. My design work also began to be published in various printed collections, in magazines domestically and overseas, and featured on other design news portals. With that degree of success while in my early twenties, something else happened:
I evolved—devolved, really—into a colossal asshole (and in just about a year out of art school, no less). The press and the praise became what fulfilled me, and they went straight to my head. They inflated my ego. I actually felt somewhat superior to my fellow designers.
The casualties? My design stagnated. Its evolution—my evolution—stagnated.
I felt so supremely confident in my abilities that I effectively stopped researching and discovering. When previously sketching concepts or iterating ideas in lead was my automatic step one, I instead leaped right into Photoshop. I drew my inspiration from the smallest of sources (and with blinders on). Any critique of my work from my peers was often vehemently dismissed. The most tragic loss: I had lost touch with my values.
My ego almost cost me some of my friendships and burgeoning professional relationships. I was toxic in talking about design and in collaboration. But thankfully, those same friends gave me a priceless gift: candor. They called me out on my unhealthy behavior.
Admittedly, it was a gift I initially did not accept but ultimately was able to deeply reflect upon. I was soon able to accept, and process, and course correct. The realization laid me low, but the re-awakening was essential. I let go of the “reward” of adulation and re-centered upon what stoked the fire for me in art school. Most importantly: I got back to my core values.
Always Students
Following that short-term regression, I was able to push forward in my personal design and career. And I could self-reflect as I got older to facilitate further growth and course correction as needed.
As an example, let’s talk about the Large Hadron Collider. The LHC was designed “to help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among the elementary objects, the deep structure of space and time, and in particular the interrelation between quantum mechanics and general relativity.” Thanks, Wikipedia.
Around fifteen years ago, in one of my earlier professional roles, I designed the interface for the application that generated the LHC’s particle collision diagrams. These diagrams are the rendering of what’s actually happening inside the Collider during any given particle collision event and are often considered works of art unto themselves.
Designing the interface for this application was a fascinating process for me, in that I worked with Fermilab physicists to understand what the application was trying to achieve, but also how the physicists themselves would be using it. To that end, in this role, I cut my teeth on usability testing, working with the Fermilab team to iterate and improve the interface. How they spoke and what they spoke about was like an alien language to me. And by making myself humble and working under the mindset that I was but a student, I made myself available to be a part of their world to generate that vital connection.
I also had my first ethnographic observation experience: going to the Fermilab location and observing how the physicists used the tool in their actual environment, on their actual terminals. For example, one takeaway was that due to the level of ambient light-driven contrast within the facility, the data columns ended up using white text on a dark gray background instead of black text-on-white. This enabled them to pore over reams of data during the day and ease their eye strain. And Fermilab and CERN are government entities with rigorous accessibility standards, so my knowledge in that realm also grew. The barrier-free design was another essential form of connection.
So to those core drivers of my visual problem-solving soul and ultimate fulfillment: discovery, exposure to new media, observation, human connection, and evolution. What opened the door for those values was me checking my ego before I walked through it.
An evergreen willingness to listen, learn, understand, grow, evolve, and connect yields our best work. In particular, I want to focus on the words ‘grow’ and ‘evolve’ in that statement. If we are always students of our craft, we are also continually making ourselves available to evolve. Yes, we have years of applicable design study under our belt. Or the focused lab sessions from a UX bootcamp. Or the monogrammed portfolio of our work. Or, ultimately, decades of a career behind us.
But all that said: experience does not equal “expert.”
As soon as we close our minds via an inner monologue of ‘knowing it all’ or branding ourselves a “#thoughtleader” on social media, the designer we are is our final form. The designer we can be will never exist.
Personalization Pyramid: A Framework for Designing with User Data
As a UX professional in today’s data-driven landscape, it’s increasingly likely that you’ve been asked to design a personalized digital experience, whether it’s a public website, user portal, or native application. Yet while there continues to be no shortage of marketing hype around personalization platforms, we still have very few standardized approaches for implementing personalized UX.
That’s where we come in. After completing dozens of personalization projects over the past few years, we gave ourselves a goal: could we create a holistic personalization framework specifically for UX practitioners? The Personalization Pyramid is a designer-centric model for standing up human-centered personalization programs, spanning data, segmentation, content delivery, and overall goals. By using this approach, you will be able to understand the core components of a contemporary, UX-driven personalization program (or at the very least know enough to get started).
Growing tools for personalization: According to a Dynamic Yield survey, 39% of respondents felt support is available on-demand when a business case is made for it (up 15% from 2020).
Source: “The State of Personalization Maturity – Q4 2021.” Dynamic Yield conducted its annual maturity survey across roles and sectors in the Americas (AMER), Europe and the Middle East (EMEA), and the Asia-Pacific (APAC) regions. This marks the fourth consecutive year publishing our research, which includes more than 450 responses from individuals in the C-Suite, Marketing, Merchandising, CX, Product, and IT.
Getting Started
For the sake of this article, we’ll assume you’re already familiar with the basics of digital personalization. A good overview can be found here: Website Personalization Planning. While UX projects in this area can take on many different forms, they often stem from similar starting points.
Common scenarios for starting a personalization project:
- Your organization or client purchased a content management system (CMS) or marketing automation platform (MAP) or related technology that supports personalization
- The CMO, CDO, or CIO has identified personalization as a goal
- Customer data is disjointed or ambiguous
- You are running some isolated targeting campaigns or A/B testing
- Stakeholders disagree on personalization approach
- Mandate of customer privacy rules (e.g. GDPR) requires revisiting existing user targeting practices
Regardless of where you begin, a successful personalization program will require the same core building blocks. We’ve captured these as the “levels” on the pyramid. Whether you are a UX designer, researcher, or strategist, understanding the core components can help make your contribution successful.
From the ground up: Soup-to-nuts personalization, without going nuts.
From top to bottom, the levels include:
- North Star: What larger strategic objective is driving the personalization program?
- Goals: What are the specific, measurable outcomes of the program?
- Touchpoints: Where will the personalized experience be served?
- Contexts and Campaigns: What personalization content will the user see?
- User Segments: What constitutes a unique, usable audience?
- Actionable Data: What reliable and authoritative data is captured by our technical platform to drive personalization?
- Raw Data: What wider set of data is conceivably available (already in our setting) allowing you to personalize?
We’ll go through each of these levels in turn. To help make this actionable, we created an accompanying deck of cards to illustrate specific examples from each level. We’ve found them helpful in personalization brainstorming sessions, and will include examples for you here.
Personalization pack: Deck of cards to help kickstart your personalization brainstorming.
Starting at the Top
The components of the pyramid are as follows:
North Star
A north star is what you are aiming for overall with your personalization program (big or small). The North Star defines the (one) overall mission of the personalization program. What do you wish to accomplish? North Stars cast a shadow. The bigger the star, the bigger the shadow. Examples of North Stars might include:
- Function: Personalize based on basic user inputs. Examples: “Raw” notifications, basic search results, system user settings and configuration options, general customization, basic optimizations
- Feature: Self-contained personalization componentry. Examples: “Cooked” notifications, advanced optimizations (geolocation), basic dynamic messaging, customized modules, automations, recommenders
- Experience: Personalized user experiences across multiple interactions and user flows. Examples: Email campaigns, landing pages, advanced messaging (i.e. C2C chat) or conversational interfaces, larger user flows and content-intensive optimizations (localization).
- Product: Highly differentiating personalized product experiences. Examples: Standalone, branded experiences with personalization at their core, like the “algotorial” playlists by Spotify such as Discover Weekly.
As in any good UX design, personalization can help accelerate designing with customer intentions. Goals are the tactical and measurable metrics that will prove the overall program is successful. A good place to start is with your current analytics and measurement program and metrics you can benchmark against. In some cases, new goals may be appropriate. The key thing to remember is that personalization itself is not a goal, rather it is a means to an end. Common goals include:
- Conversion
- Time on task
- Net promoter score (NPS)
- Customer satisfaction
Touchpoints are where the personalization happens. As a UX designer, this will be one of your largest areas of responsibility. The touchpoints available to you will depend on how your personalization and associated technology capabilities are instrumented, and should be rooted in improving a user’s experience at a particular point in the journey. Touchpoints can be multi-device (mobile, in-store, website) but also more granular (web banner, web pop-up etc.). Here are some examples:
Channel-level Touchpoints
- Email: Role
- Email: Time of open
- In-store display (JSON endpoint)
- Native app
- Search
Wireframe-level Touchpoints
- Web overlay
- Web alert bar
- Web banner
- Web content block
- Web menu
If you’re designing for web interfaces, for example, you will likely need to include personalized “zones” in your wireframes. The content for these can be presented programmatically in touchpoints based on our next step, contexts and campaigns.
Targeted Zones: Examples from Kibo of personalized “zones” on page-level wireframes occurring at various stages of a user journey (Engagement phase at left and Purchase phase at right.) Source: “Essential Guide to End-to-End Personalization” by Kibo.
Contexts and Campaigns
Once you’ve outlined some touchpoints, you can consider the actual personalized content a user will receive. Many personalization tools will refer to these as “campaigns” (so, for example, a campaign on a web banner for new visitors to the website). These will programmatically be shown at certain touchpoints to certain user segments, as defined by user data. At this stage, we find it helpful to consider two separate models: a context model and a content model. The context helps you consider the level of engagement of the user at the personalization moment, for example a user casually browsing information vs. doing a deep-dive. Think of it in terms of information retrieval behaviors. The content model can then help you determine what type of personalization to serve based on the context (for example, an “Enrich” campaign that shows related articles may be a suitable supplement to extant content).
Personalization Context Model:
- Browse
- Skim
- Nudge
- Feast
Personalization Content Model:
- Alert
- Make Easier
- Cross-Sell
- Enrich
We’ve written extensively about each of these models elsewhere, so if you’d like to read more you can check out Colin’s Personalization Content Model and Jeff’s Personalization Context Model.
Campaign and Context cards: This level of the pyramid can help your team focus around the types of personalization to deliver to end users and the use-cases in which they will experience it.
User Segments
User segments can be created prescriptively or adaptively, based on user research (e.g. via rules and logic tied to set user behaviors or via A/B testing). At a minimum you will likely need to consider how to treat the unknown or first-time visitor, the guest or returning visitor for whom you may have a stateful cookie (or equivalent post-cookie identifier), or the authenticated visitor who is logged in. Here are some examples from the personalization pyramid:
- Unknown
- Guest
- Authenticated
- Default
- Referred
- Role
- Cohort
- Unique ID
Every organization with any digital presence has data. It’s a matter of asking what data you can ethically collect on users, how inherently reliable and valuable it is, and how you can use it (sometimes known as “data activation”). Fortunately, the tide is turning to first-party data: a recent study by Twilio estimates some 80% of businesses are using at least some type of first-party data to personalize the customer experience.
Source: “The State of Personalization 2021” by Twilio. Survey respondents were n=2,700 adult consumers who have purchased something online in the past 6 months, and n=300 adult manager+ decision-makers at consumer-facing companies that provide goods and/or services online. Respondents were from the United States, United Kingdom, Australia, and New Zealand. Data was collected from April 8 to April 20, 2021.
First-party data represents multiple advantages on the UX front, including being relatively simple to collect, more likely to be accurate, and less susceptible to the “creep factor” of third-party data. So a key part of your UX strategy should be to determine what the best form of data collection is on your audiences. Here are some examples:
Figure 1.1.2: Example of a personalization maturity curve, showing progression from basic recommendations functionality to true individualization. Credit: https://kibocommerce.com/blog/kibos-personalization-maturity-chart/
There is a progression of profiling when it comes to recognizing and making decisions about different audiences and their signals. It tends to move towards more granular constructs about smaller and smaller cohorts of users as time, confidence, and data volume grow.
While some combination of implicit and explicit data is generally a prerequisite for any implementation (more commonly referred to as first-party and third-party data), ML efforts are typically not cost-effective directly out of the box. This is because a strong data backbone and content repository is a prerequisite for optimization. But these approaches should be considered as part of the larger roadmap and may indeed help accelerate the organization’s overall progress. Typically at this point you will partner with key stakeholders and product owners to design a profiling model. The profiling model includes defining the approach to configuring profiles, profile keys, profile cards, and pattern cards: a multi-faceted approach to profiling that makes it scalable.
Pulling it Together
The cards comprise the starting point to an inventory of sorts (we provide blanks for you to tailor your own): a set of potential levers and motivations for the style of personalization activities you aspire to deliver. They are more valuable, though, when thought of in a grouping.
In assembling a card “hand”, one can begin to trace the entire trajectory from leadership focus down through a strategic and tactical execution. It is also at the heart of the way both co-authors have conducted workshops in assembling a program backlog—which is a fine subject for another article.
In the meantime, what is important to note is that each colored class of card is helpful to survey in understanding the range of choices potentially at your disposal. Just as important is threading them together and making concrete decisions about for whom this decisioning will be made: where, when, and how.
Scenario A: We want to use personalization to improve customer satisfaction on the website. For unknown users, we will create a short quiz to better identify what the user has come to do. This is sometimes referred to as “badging” a user in onboarding contexts, to better characterize their present intent and context.
Lay Down Your Cards
Any sustainable personalization strategy must consider near, mid and long-term goals. Even with the leading CMS platforms like Sitecore and Adobe or the most exciting composable CMS DXP out there, there is simply no “easy button” wherein a personalization program can be stood up and immediately yield meaningful results. That said, there is a common grammar to all personalization activities, just like every sentence has nouns and verbs. These cards attempt to map that territory.
Mobile-First CSS: Is It Time for a Rethink?
The mobile-first design methodology is great—it focuses on what really matters to the user, it’s well-practiced, and it’s been a common design pattern for years. So developing your CSS mobile-first should be great, too…right?
Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview see “What is Mobile First CSS and Why Does It Rock?”). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that’s harder to maintain. Admit it—how many of us willingly want that?
On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions you’re working on. To help you get started, here’s how I go about tackling the factors you need to watch for, and I’ll discuss some alternate solutions if mobile-first doesn’t seem to suit your project.
Advantages of mobile-first
Some of the things to like with mobile-first CSS development—and why it’s been the de facto development methodology for so long—make a lot of sense:
Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy—you just focus on the mobile view and get developing.
Tried and tested. It’s a tried and tested methodology that’s worked for years for a reason: it solves a problem really well.
Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project).
Prevents desktop-centric development. As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices!
Disadvantages of mobile-first
Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications:
More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints.
Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible.
Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) requires all higher breakpoints to be regression tested.
The browser can’t prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don’t leverage the browser’s capability to download CSS files in priority order.
There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient. It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won’t be able to use a utility class for a style that has been reset with a higher specificity.
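To make that specificity point concrete, here is a minimal sketch (the .card component and the .u-pad utility class are invented purely for illustration):

// Mobile-first: the desktop “reset” has to be written out explicitly.
.card {
  padding: 20px;
  @media (min-width: 1024px) {
    padding: 0; // back to the browser default, but now declared at class-level specificity
  }
}
// A spacing utility with the same specificity only wins if it happens to appear
// later in the stylesheet; if the utilities are bundled first, the component’s
// desktop reset silently overrides .u-pad at wider viewports.
.u-pad {
  padding: 20px;
}

With a closed media query range, the desktop declaration simply wouldn’t exist, so there would be nothing for the utility class to compete with.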
With this in mind, I’m developing CSS with a focus on the default values much more these days. Since there’s no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously. I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set).
This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component’s layout looks like it should be based on Flexbox at all breakpoints, it’s fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges. Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don’t want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view!
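As a rough sketch of that Flexbox/Grid scenario (the .product-list class name and the exact values are illustrative; the 1024px cutoff matches the breakpoints discussed below), each layout lives entirely in its own range:

.product-list {
  // Mobile and tablet: a simple wrapping flex layout, confined to its own range.
  @media (max-width: 1023.98px) {
    display: flex;
    flex-wrap: wrap;
    gap: 16px;
  }
  // Desktop: an independent grid layout; nothing here has to undo the flex styles.
  @media (min-width: 1024px) {
    display: grid;
    grid-template-columns: repeat(4, 1fr);
    gap: 24px;
  }
}

Because neither range inherits from the other, editing one layout can’t ripple into the other.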
Though this approach isn’t going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others.
Having said that, I don’t feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheet—a sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but that’s by no means a requirement.
Closed media query ranges in practice
In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges. To illustrate the difference (I’m using SCSS for brevity), let’s assume there are three visual designs:
- smaller than 768
- from 768 to below 1024
- 1024 and anything larger
Take a simple example where a block-level element has a default padding of “20px,” which is overwritten at tablet to be “40px” and set back to “20px” on desktop.
Classic min-width mobile-first
.my-block {
  padding: 20px;
  @media (min-width: 768px) {
    padding: 40px;
  }
  @media (min-width: 1024px) {
    padding: 20px;
  }
}
Closed media query range
.my-block {
  padding: 20px;
  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}
The subtle difference is that the mobile-first example sets the default padding to “20px” and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to “20px” and only overrides it at the relevant breakpoint where it isn’t the default value (in this instance, tablet is the exception).
The goal is to:
- Only set styles when needed.
- Not set them with the expectation of overwriting them later on, again and again.
To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. We’ll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited.
Taking the above example, if we find that the .my-block spacing on desktop is already accounted for by the margin at that breakpoint, and we want to remove the padding there altogether, we can do so by setting the mobile padding in a closed media query range.
.my-block {
  @media (max-width: 767.98px) {
    padding: 20px;
  }
  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}
The browser default padding for our block is “0,” so instead of adding a desktop media query and using unset or “0” for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won’t get picked up at wider breakpoints. At the desktop breakpoint, we won’t need to set any padding style, as we want the browser default value.
Back in the day, keeping the number of requests to a minimum was very important due to the browser’s limit of concurrent requests (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority.
With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn’t. This is more performant and can reduce the overall time page rendering is blocked.
Which HTTP version are you using?
To determine which version of HTTP you’re using, go to your website and open your browser’s dev tools. Next, select the Network tab and make sure the Protocol column is visible. If “h2” is listed under Protocol, it means HTTP/2 is being used.
Note: to view the Protocol in your browser’s dev tools, go to the Network tab, reload your page, right-click any column header (e.g., Name), and check the Protocol column.
Note: for a summarized comparison, see ImageKit’s “HTTP/2 vs. HTTP/1.”
Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent browser support for HTTP/2.
Splitting the CSS
Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they’re render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority.
In the following example of a website visited on a mobile breakpoint, we can see the mobile and default CSS are loaded with “Highest” priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they’ll be needed later, but with “Lowest” priority.
With bundled CSS, the browser will have to download the CSS file and parse it before rendering can start.
With the CSS separated into different files, linked and marked up with the relevant media attribute, the browser can instead prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths, as opposed to classic mobile-first min-width queries, where the desktop browser would have to download all the CSS with Highest priority. We can’t assume that desktop users always have a fast connection. For instance, in many rural areas, internet connection speeds are still slow.
The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below.
Bundled CSS
<link href="site.css" rel="stylesheet">
This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority.
Separated CSS
<link href="default.css" rel="stylesheet">
<link href="mobile.css" media="screen and (max-width: 767.98px)" rel="stylesheet">
<link href="tablet.css" media="screen and (min-width: 768px) and (max-width: 1083.98px)" rel="stylesheet">
<link href="desktop.css" media="screen and (min-width: 1084px)" rel="stylesheet">
<link href="print.css" media="print" rel="stylesheet">
Separating the CSS and specifying a media attribute value on each link tag allows the browser to prioritize what it currently needs. Out of the five files listed above, two will be downloaded with Highest priority: the default file, and the file that matches the current media query. The others will be downloaded with Lowest priority.
Depending on the project’s deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test.
The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices.
I don’t think anyone wants to return to that development model again, but it’s important we don’t lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device—any device—over others. For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what’s an exception, seems like the natural next step. I’ve started noticing small simplifications in my own CSS, as well as other developers’, and testing and maintenance work is also a bit simpler and more productive.
In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides. But whichever methodology you choose, it needs to suit the project. Mobile-first may—or may not—turn out to be the best choice for what’s involved, but first you need to solidly understand the trade-offs you’re stepping into.
Designers, (Re)define Success First
About two and a half years ago, I introduced the idea of daily ethical design. It was born out of my frustration with the many obstacles to achieving design that’s usable and equitable; protects people’s privacy, agency, and focus; benefits society; and restores nature. I argued that we need to overcome the inconveniences that prevent us from acting ethically and that we need to elevate design ethics to a more practical level by structurally integrating it into our daily work, processes, and tools.
Unfortunately, we’re still very far from this ideal.
At the time, I didn’t know yet how to structurally integrate ethics. Yes, I had found some tools that had worked for me in previous projects, such as using checklists, assumption tracking, and “dark reality” sessions, but I didn’t manage to apply those in every project. I was still struggling for time and support, and at best I had only partially achieved a higher (moral) quality of design—which is far from my definition of structurally integrated.
I decided to dig deeper for the root causes in business that prevent us from practicing daily ethical design. Now, after much research and experimentation, I believe that I’ve found the key that will let us structurally integrate ethics. And it’s surprisingly simple! But first we need to zoom out to get a better understanding of what we’re up against.
Influence the system
Sadly, we’re trapped in a capitalistic system that reinforces consumerism and inequality, and it’s obsessed with the fantasy of endless growth. Sea levels, temperatures, and our demand for energy continue to rise unchallenged, while the gap between rich and poor continues to widen. Shareholders expect ever-higher returns on their investments, and companies feel forced to set short-term objectives that reflect this. Over the last decades, those objectives have twisted our well-intended human-centered mindset into a powerful machine that promotes ever-higher levels of consumption. When we’re working for an organization that pursues “double-digit growth” or “aggressive sales targets” (which is 99 percent of us), that’s very hard to resist while remaining human friendly. Even with our best intentions, and even though we like to say that we create solutions for people, we’re a part of the problem.
What can we do to change this?
We can start by acting on the right level of the system. Donella H. Meadows, a system thinker, once listed ways to influence a system in order of effectiveness. When you apply these to design, you get:
- At the lowest level of effectiveness, you can affect numbers such as usability scores or the number of design critiques. But none of that will change the direction of a company.
- Similarly, affecting buffers (such as team budgets), stocks (such as the number of designers), flows (such as the number of new hires), and delays (such as the time that it takes to hear about the effect of design) won’t significantly affect a company.
- Focusing instead on feedback loops such as management control, employee recognition, or design-system investments can help a company become better at achieving its objectives. But that doesn’t change the objectives themselves, which means that the organization will still work against your ethical-design ideals.
- The next level, information flows, is what most ethical-design initiatives focus on now: the exchange of ethical methods, toolkits, articles, conferences, workshops, and so on. This is also where ethical design has remained mostly theoretical. We’ve been focusing on the wrong level of the system all this time.
- Take rules, for example—they beat knowledge every time. There can be widely accepted rules, such as how finance works, or a scrum team’s definition of done. But ethical design can also be smothered by unofficial rules meant to maintain profits, often revealed through comments such as “the client didn’t ask for it” or “don’t make it too big.”
- Changing the rules without holding official power is very hard. That’s why the next level is so influential: self-organization. Experimentation, bottom-up initiatives, passion projects, self-steering teams—all of these are examples of self-organization that improve the resilience and creativity of a company. It’s exactly this diversity of viewpoints that’s needed to structurally tackle big systemic issues like consumerism, wealth inequality, and climate change.
- Yet even stronger than self-organization are objectives and metrics. Our companies want to make more money, which means that everything and everyone in the company does their best to… make the company more money. And once I realized that profit is nothing more than a measurement, I understood how crucial a very specific, defined metric can be toward pushing a company in a certain direction.
The takeaway? If we truly want to incorporate ethics into our daily design practice, we must first change the measurable objectives of the company we work for, from the bottom up.
Redefine success
Traditionally, we consider a product or service successful if it’s desirable to humans, technologically feasible, and financially viable. You tend to see these represented as equals; if you type the three words in a search engine, you’ll find diagrams of three equally sized, evenly arranged circles.
But in our hearts, we all know that the three dimensions aren’t equally weighted: it’s viability that ultimately controls whether a product will go live. So a more realistic representation might look like this:
Desirability and feasibility are the means; viability is the goal. Companies—outside of nonprofits and charities—exist to make money.
A genuinely purpose-driven company would try to reverse this dynamic, recognizing finance for what it was intended to be: a means. So both feasibility and viability are means to achieve what the company set out to achieve. It makes intuitive sense: to achieve most anything, you need resources, people, and money. (Fun fact: the Italian language knows no difference between feasibility and viability; both are simply fattibilità.)
But simply swapping viable for desirable isn’t enough to achieve an ethical outcome. Desirability is still linked to consumerism because the associated activities aim to identify what people want—whether it’s good for them or not. Desirability objectives, such as user satisfaction or conversion, don’t consider whether a product is healthy for people. They don’t prevent us from creating products that distract or manipulate people or stop us from contributing to society’s wealth inequality. They’re unsuitable for establishing a healthy balance with nature.
There’s a fourth dimension of success that’s missing: our designs also need to be ethical in the effect that they have on the world.
This is hardly a new idea. Many similar models exist, some calling the fourth dimension accountability, integrity, or responsibility. What I’ve never seen before, however, is the necessary step that comes after: to influence the system as designers and to make ethical design more practical, we must create objectives for ethical design that are achievable and inspirational. There’s no one way to do this because it highly depends on your culture, values, and industry. But I’ll give you the version that I developed with a group of colleagues at a design agency. Consider it a template to get started.
Pursue well-being, equity, and sustainability
We created objectives that address design’s effect on three levels: individual, societal, and global.
An objective on the individual level tells us what success is beyond the typical focus of usability and satisfaction—instead considering matters such as how much time and attention is required from users. We pursued well-being:
We create products and services that allow for people’s health and happiness. Our solutions are calm, transparent, nonaddictive, and nonmisleading. We respect our users’ time, attention, and privacy, and help them make healthy and respectful choices.
An objective on the societal level forces us to consider our impact beyond just the user, widening our attention to the economy, communities, and other indirect stakeholders. We called this objective equity:
We create products and services that have a positive social impact. We consider economic equality, racial justice, and the inclusivity and diversity of people as teams, users, and customer segments. We listen to local culture, communities, and those we affect.
Finally, the objective on the global level aims to ensure that we remain in balance with the only home we have as humanity. Referring to it simply as sustainability, our definition was:
We create products and services that reward sufficiency and reusability. Our solutions support the circular economy: we create value from waste, repurpose products, and prioritize sustainable choices. We deliver functionality instead of ownership, and we limit energy use.
In short, ethical design (to us) meant achieving wellbeing for each user and an equitable value distribution within society through a design that can be sustained by our living planet. When we introduced these objectives in the company, for many colleagues, design ethics and responsible design suddenly became tangible and achievable through practical—and even familiar—actions.
Measure impact
But defining these objectives still isn’t enough. What truly caught the attention of senior management was the fact that we created a way to measure every design project’s well-being, equity, and sustainability.
This overview lists example metrics that you can use as you pursue well-being, equity, and sustainability:
There’s a lot of power in measurement. As the saying goes, what gets measured gets done. Donella Meadows once shared this example:
“If the desired system state is national security, and that is defined as the amount of money spent on the military, the system will produce military spending. It may or may not produce national security.”
This phenomenon explains why desirability is a poor indicator of success: it’s typically defined as the increase in customer satisfaction, session length, frequency of use, conversion rate, churn rate, download rate, and so on. But none of these metrics increase the health of people, communities, or ecosystems. What if instead we measured success through metrics for (digital) well-being, such as (reduced) screen time or software energy consumption?
There’s another important message here. Even if we set an objective to build a calm interface, if we were to choose the wrong metric for calmness—say, the number of interface elements—we could still end up with a screen that induces anxiety. Choosing the wrong metric can completely undo good intentions.
Additionally, choosing the right metric is enormously helpful in focusing the design team. Once you go through the exercise of choosing metrics for our objectives, you’re forced to consider what success looks like concretely and how you can prove that you’ve reached your ethical objectives. It also forces you to consider what we as designers have control over: what can I include in my design or change in my process that will lead to the right type of success? The answer to this question brings a lot of clarity and focus.
And finally, it’s good to remember that traditional businesses run on measurements, and managers love to spend much time discussing charts (ideally hockey-stick shaped)—especially if they concern profit, the one-above-all of metrics. For good or ill, to improve the system, to have a serious discussion about ethical design with managers, we’ll need to speak that business language.
Practice daily ethical design
Once you’ve defined your objectives and you have a reasonable idea of the potential metrics for your design project, only then do you have a chance to structurally practice ethical design. It “simply” becomes a matter of using your creativity and choosing from all the knowledge and toolkits already available to you.
I think this is quite exciting! It opens a whole new set of challenges and considerations for the design process. Should you go with that energy-consuming video or would a simple illustration be enough? Which typeface is the most calm and inclusive? Which new tools and methods do you use? When is the website’s end of life? How can you provide the same service while requiring less attention from users? How do you make sure that those who are affected by decisions are there when those decisions are made? How can you measure your effects?
The redefinition of success will completely change what it means to do good design.
There is, however, a final piece of the puzzle that’s missing: convincing your client, product owner, or manager to be mindful of well-being, equity, and sustainability. For this, it’s essential to engage stakeholders in a dedicated kickoff session.
Kick it off or fall back to status quo
The kickoff is the most important meeting, and also the easiest one to forget to include. It consists of two major phases: 1) the alignment of expectations, and 2) the definition of success.
In the first phase, the entire (design) team goes over the project brief and meets with all the relevant stakeholders. Everyone gets to know one another and express their expectations on the outcome and their contributions to achieving it. Assumptions are raised and discussed. The aim is to get on the same level of understanding and to in turn avoid preventable miscommunications and surprises later in the project.
For example, for a recent freelance project that aimed to design a digital platform that facilitates US student advisors’ documentation and communication, we conducted an online kickoff with the client, a subject-matter expert, and two other designers. We used a combination of canvases on Miro: one with questions from “Manual of Me” (to get to know each other), a Team Canvas (to express expectations), and a version of the Project Canvas to align on scope, timeline, and other practical matters.
The above is the traditional purpose of a kickoff. But just as important as expressing expectations is agreeing on what success means for the project—in terms of desirability, viability, feasibility, and ethics. What are the objectives in each dimension?
Agreement on what success means at such an early stage is crucial because you can rely on it for the remainder of the project. If, for example, the design team wants to build an inclusive app for a diverse user group, they can raise diversity as a specific success criterion during the kickoff. If the client agrees, the team can refer back to that promise throughout the project. “As we agreed in our first meeting, having a diverse user group that includes A and B is necessary to build a successful product. So we do activity X and follow research process Y.” Compare those odds to a situation in which the team didn’t agree to that beforehand and had to ask for permission halfway through the project. The client might argue that that came on top of the agreed scope—and she’d be right.
In the case of this freelance project, to define success I prepared a round canvas that I call the Wheel of Success. It consists of an inner ring, meant to capture ideas for objectives, and a set of outer rings, meant to capture ideas on how to measure those objectives. The rings are divided into six dimensions of successful design: healthy, equitable, sustainable, desirable, feasible, and viable.
We went through each dimension, writing down ideas on digital sticky notes. Then we discussed our ideas and verbally agreed on the most important ones. For example, our client agreed that sustainability and progressive enhancement are important success criteria for the platform. And the subject-matter expert emphasized the importance of including students from low-income and disadvantaged groups in the design process.
After the kickoff, we summarized our ideas and shared understanding in a project brief that captured these aspects:
- the project’s origin and purpose: why are we doing this project?
- the problem definition: what do we want to solve?
- the concrete goals and metrics for each success dimension: what do we want to achieve?
- the scope, process, and role descriptions: how will we achieve it?
With such a brief in place, you can use the agreed-upon objectives and concrete metrics as a checklist of success, and your design team will be ready to pursue the right objective—using the tools, methods, and metrics at their disposal to achieve ethical outcomes.
Conclusion
Over the past year, quite a few colleagues have asked me, “Where do I start with ethical design?” My answer has always been the same: organize a session with your stakeholders to (re)define success. Even though you might not always be 100 percent successful in agreeing on goals that cover all responsibility objectives, that beats the alternative (the status quo) every time. If you want to be an ethical, responsible designer, there’s no skipping this step.
To be even more specific: if you consider yourself a strategic designer, your challenge is to define ethical objectives, set the right metrics, and conduct those kick-off sessions. If you consider yourself a system designer, your starting point is to understand how your industry contributes to consumerism and inequality, understand how finance drives business, and brainstorm which levers are available to influence the system on the highest level. Then redefine success to create the space to exercise those levers.
And for those who consider themselves service designers or UX designers or UI designers: if you truly want to have a positive, meaningful impact, stay away from the toolkits and meetups and conferences for a while. Instead, gather your colleagues and define goals for well-being, equity, and sustainability through design. Engage your stakeholders in a workshop and challenge them to think of ways to achieve and measure those ethical goals. Take their input, make it concrete and visible, ask for their agreement, and hold them to it.
Otherwise, I’m genuinely sorry to say, you’re wasting your precious time and creative energy.
Of course, engaging your stakeholders in this way can be uncomfortable. Many of my colleagues expressed doubts such as “What will the client think of this?,” “Will they take me seriously?,” and “Can’t we just do it within the design team instead?” In fact, a product manager once asked me why ethics couldn’t just be a structured part of the design process—to just do it without spending the effort to define ethical objectives. It’s a tempting idea, right? We wouldn’t have to have difficult discussions with stakeholders about what values or which key-performance indicators to pursue. It would let us focus on what we like and do best: designing.
But as systems theory tells us, that’s not enough. For those of us who aren’t from marginalized groups and have the privilege to be able to speak up and be heard, that uncomfortable space is exactly where we need to be if we truly want to make a difference. We can’t remain within the design-for-designers bubble, enjoying our privileged working-from-home situation, disconnected from the real world out there. For those of us who have the possibility to speak up and be heard: if we solely keep talking about ethical design and it remains at the level of articles and toolkits—we’re not designing ethically. It’s just theory. We need to actively engage our colleagues and clients by challenging them to redefine success in business.
With a bit of courage, determination, and focus, we can break out of this cage that finance and business-as-usual have built around us and become facilitators of a new type of business that can see beyond financial value. We just need to agree on the right objectives at the start of each design project, find the right metrics, and realize that we already have everything that we need to get started. That’s what it means to do daily ethical design.
For their inspiration and support over the years, I would like to thank Emanuela Cozzi Schettini, José Gallegos, Annegret Bönemann, Ian Dorr, Vera Rademaker, Virginia Rispoli, Cecilia Scolaro, Rouzbeh Amini, and many others.
Breaking Out of the Box
CSS is about styling boxes. In fact, the whole web is made of boxes, from the browser viewport to elements on a page. But every once in a while a new feature comes along that makes us rethink our design approach.
Round displays, for example, make it fun to play with circular clip areas. Mobile screen notches and virtual keyboards offer challenges to best organize content that stays clear of them. And dual screen or foldable devices make us rethink how to best use available space in a number of different device postures.
Sketches of a round display, a common rectangular mobile display, and a device with a foldable display.
These recent evolutions of the web platform made it both more challenging and more interesting to design products. They’re great opportunities for us to break out of our rectangular boxes.
I’d like to talk about a new feature similar to the above: the Window Controls Overlay for Progressive Web Apps (PWAs).
Progressive Web Apps are blurring the lines between apps and websites. They combine the best of both worlds. On one hand, they’re stable, linkable, searchable, and responsive just like websites. On the other hand, they provide additional powerful capabilities, work offline, and read files just like native apps.
As a design surface, PWAs are really interesting because they challenge us to think about what mixing web and device-native user interfaces can be. On desktop devices in particular, we have more than 40 years of history telling us what applications should look like, and it can be hard to break out of this mental model.
At the end of the day though, PWAs on desktop are constrained to the window they appear in: a rectangle with a title bar at the top.
Here’s what a typical desktop PWA app looks like:
Sketches of two rectangular user interfaces representing the desktop Progressive Web App status quo on the macOS and Windows operating systems, respectively.
Sure, as the author of a PWA, you get to choose the color of the title bar (using the Web Application Manifest theme_color property), but that’s about it.
What if we could think outside this box, and reclaim the real estate of the app’s entire window? Doing so would give us a chance to make our apps more beautiful and feel more integrated in the operating system.
This is exactly what the Window Controls Overlay offers. This new PWA functionality makes it possible to take advantage of the full surface area of the app, including where the title bar normally appears.
About the title bar and window controls
Let’s start with an explanation of what the title bar and window controls are.
The title bar is the area displayed at the top of an app window, which usually contains the app’s name. Window controls are the affordances, or buttons, that make it possible to minimize, maximize, or close the app’s window, and are also displayed at the top.
A sketch of a rectangular application user interface highlighting the title bar area and window control buttons.
Window Controls Overlay removes the physical constraint of the title bar and window controls areas. It frees up the full height of the app window, enabling the title bar and window control buttons to be overlaid on top of the application’s web content.
A sketch of a rectangular application user interface using Window Controls Overlay. The title bar and window controls are no longer in an area separated from the app’s content.
If you are reading this article on a desktop computer, take a quick look at other apps. Chances are they’re already doing something similar to this. In fact, the very web browser you are using to read this uses the top area to display tabs.
A screenshot of the top area of a browser’s user interface showing a group of tabs that share the same horizontal space as the app window controls.
Spotify displays album artwork all the way to the top edge of the application window.
A screenshot of an album in Spotify’s desktop application. Album artwork spans the entire width of the main content area, all the way to the top and right edges of the window, and the right edge of the main navigation area on the left side. The application and album navigation controls are overlaid directly on top of the album artwork.
Microsoft Word uses the available title bar space to display the auto-save and search functionalities, and more.
A screenshot of Microsoft Word’s toolbar interface. Document file information, search, and other functionality appear at the top of the window, sharing the same horizontal space as the app’s window controls.
The whole point of this feature is to allow you to make use of this space with your own content while providing a way to account for the window control buttons. And it enables you to offer this modified experience on a range of platforms while not adversely affecting the experience on browsers or devices that don’t support Window Controls Overlay. After all, PWAs are all about progressive enhancement, so this feature is a chance to enhance your app to use this extra space when it’s available.
Let’s use the feature
For the rest of this article, we’ll be working on a demo app to learn more about using the feature.
The demo app is called 1DIV. It’s a simple CSS playground where users can create designs using CSS and a single HTML element.
The app has two pages. The first lists the existing CSS designs you’ve created:
A screenshot of the 1DIV app displaying a thumbnail grid of CSS designs a user created.
The second page enables you to create and edit CSS designs:
A screenshot of the 1DIV app editor page. The top half of the window displays a rendered CSS design, and a text editor on the bottom half of the window displays the CSS used to create it.
Since I’ve added a simple web manifest and service worker, we can install the app as a PWA on desktop. Here is what it looks like on macOS:
Screenshots of the 1DIV app thumbnail view and CSS editor view on macOS. This version of the app’s window has a separate control bar at the top for the app name and window control buttons.
And on Windows:
Screenshots of the 1DIV app thumbnail view and CSS editor view on the Windows operating system. This version of the app’s window also has a separate control bar at the top for the app name and window control buttons.
Our app is looking good, but the white title bar in the first page is wasted space. In the second page, it would be really nice if the design area went all the way to the top of the app window.
Let’s use the Window Controls Overlay feature to improve this.
Enabling Window Controls Overlay
The feature is still experimental at the moment. To try it, you need to enable it in one of the supported browsers.
As of now, it has been implemented in Chromium, as a collaboration between Microsoft and Google. We can therefore use it in Chrome or Edge by going to the internal about://flags page, and enabling the Desktop PWA Window Controls Overlay flag.
Using Window Controls Overlay
To use the feature, we need to add the following display_override member to our web app’s manifest file:
{
"name": "1DIV",
"description": "1DIV is a mini CSS playground",
"lang": "en-US",
"start_url": "/",
"theme_color": "#ffffff",
"background_color": "#ffffff",
"display_override": [
"window-controls-overlay"
],
"icons": [
...
]
}
On the surface, the feature is really simple to use. This manifest change is the only thing we need to make the title bar disappear and turn the window controls into an overlay.
However, to provide a great experience for all users regardless of what device or browser they use, and to make the most of the title bar area in our design, we’ll need a bit of CSS and JavaScript code.
Here is what the app looks like now:
Screenshot of the 1DIV app thumbnail view using Window Controls Overlay on macOS. The separate top bar area is gone, but the window controls are now blocking some of the app’s interface.
The title bar is gone, which is what we wanted, but our logo, search field, and NEW button are partially covered by the window controls because now our layout starts at the top of the window.
It’s similar on Windows, with the difference that the close, maximize, and minimize buttons appear on the right side, grouped together with the PWA control buttons:
Screenshot of the 1DIV app thumbnail display using Window Controls Overlay on the Windows operating system. The separate top bar area is gone, but the window controls are now blocking some of the app’s content.
Using CSS to keep clear of the window controls
Along with the feature, new CSS environment variables have been introduced:
titlebar-area-x
titlebar-area-y
titlebar-area-width
titlebar-area-height
You use these variables with the CSS env() function to position your content where the title bar would have been while ensuring it won’t overlap with the window controls. In our case, we’ll use two of the variables to position our header, which contains the logo, search bar, and NEW button.
header {
position: absolute;
left: env(titlebar-area-x, 0);
width: env(titlebar-area-width, 100%);
height: var(--toolbar-height);
}
The titlebar-area-x variable gives us the distance from the left of the viewport to where the title bar would appear, and titlebar-area-width is its width. (Remember, this is not equivalent to the width of the entire viewport, just the title bar portion, which, as noted earlier, doesn’t include the window controls.)
By doing this, we make sure our content remains fully visible. We’re also defining fallback values (the second parameter in the env() function) for when the variables are not defined (such as on non-supporting browsers, or when the Window Controls Overlay feature is disabled).
Now our header adapts to its surroundings, and it doesn’t feel like the window control buttons have been added as an afterthought. The app looks a lot more like a native app.
Changing the window controls background color so it blends in
Now let’s take a closer look at our second page: the CSS playground editor.
Screenshots of the 1DIV app CSS editor view with Window Controls Overlay in macOS and Windows, respectively. The window controls overlay areas have a solid white background color, which contrasts with the hot pink color of the example CSS design displayed in the editor.
Not great. Our CSS demo area does go all the way to the top, which is what we wanted, but the way the window controls appear as white rectangles on top of it is quite jarring.
We can fix this by changing the app’s theme color. There are a couple of ways to define it:
- PWAs can define a theme color in the web app manifest file using the theme_color manifest member. This color is then used by the OS in different ways. On desktop platforms, it is used to provide a background color to the title bar and window controls.
- Websites can use the theme-color meta tag as well. It’s used by browsers to customize the color of the UI around the web page. For PWAs, this color can override the manifest theme_color.
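For reference, the meta tag is a single line in the page’s head, shown here with the same white value we use as the app’s default; the runtime function shown a bit further down assumes this tag is present in the markup:
<meta name="theme-color" content="#ffffff">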
In our case, we can set the manifest theme_color to white to provide the right default color for our app. The OS will read this color value when the app is installed and use it to make the window controls background color white. This color works great for our main page with the list of demos.
The theme-color meta tag can be changed at runtime, using JavaScript. So we can do that to override the white with the right demo background color when one is opened.
Here is the function we’ll use:
function themeWindow(bgColor) {
document.querySelector("meta[name=theme-color]").setAttribute('content', bgColor);
}
With this in place, we can imagine how using color and CSS transitions can produce a smooth change from the list page to the demo page, and enable the window control buttons to blend in with the rest of the app’s interface.
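Here’s a minimal sketch of how that wiring might look. The openDemo function and the data-bg-color attribute are hypothetical names for this demo, not part of any API; the smooth change would come from a CSS transition on the body’s background-color:
function openDemo(demoElement) {
  // Hypothetical: each demo stores its background color in a data attribute.
  const bgColor = demoElement.dataset.bgColor || '#ffffff';
  // A CSS transition on the body's background-color makes this change feel smooth.
  document.body.style.backgroundColor = bgColor;
  // Update the window controls overlay color to match the demo.
  themeWindow(bgColor);
}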
Screenshot of the 1DIV app CSS editor view on the Windows operating system with Window Controls Overlay and updated CSS demonstrating how the window control buttons blend in with the rest of the app’s interface.
Dragging the window
Now, getting rid of the title bar entirely does have an important accessibility consequence: it’s much more difficult to move the application window around.
The title bar provides a sizable area for users to click and drag, but by using the Window Controls Overlay feature, this area becomes limited to where the control buttons are, and users have to very precisely aim between these buttons to move the window.
Fortunately, this can be fixed using CSS with the app-region property. This property is, for now, only supported in Chromium-based browsers and needs the -webkit- vendor prefix.
To make any element of the app become a dragging target for the window, we can use the following:
-webkit-app-region: drag;
It is also possible to explicitly make an element non-draggable:
-webkit-app-region: no-drag;
These options can be useful for us. We can make the entire header a dragging target, but make the search field and NEW button within it non-draggable so they can still be used as normal.
However, because the editor page doesn’t display the header, users wouldn’t be able to drag the window while editing code. So let's use a different approach. We’ll create another element before our header, also absolutely positioned, and dedicated to dragging the window.
<div class="drag"></div>
<header>...</header>
.drag {
position: absolute;
top: 0;
width: 100%;
height: env(titlebar-area-height, 0);
-webkit-app-region: drag;
}
With the above code, we’re making the draggable area span the entire viewport width, and using the titlebar-area-height variable to make it as tall as what the title bar would have been. This way, our draggable area is aligned with the window control buttons as shown below.
And, now, to make sure our search field and button remain usable:
header .search,
header .new {
-webkit-app-region: no-drag;
}
With the above code, users can click and drag where the title bar used to be. It is an area that users expect to be able to use to move windows on desktop, and we’re not breaking this expectation, which is good.
An animated view of the 1DIV app being dragged across a Windows desktop with the mouse.
Adapting to window resize
It may be useful for an app to know both whether the window controls overlay is visible and when its size changes. In our case, if the user made the window very narrow, there wouldn’t be enough space for the search field, logo, and button to fit, so we’d want to push them down a bit.
The Window Controls Overlay feature comes with a JavaScript API we can use to do this: navigator.windowControlsOverlay.
The API provides three interesting things:
- navigator.windowControlsOverlay.visible lets us know whether the overlay is visible.
- navigator.windowControlsOverlay.getBoundingClientRect() lets us know the position and size of the title bar area.
- navigator.windowControlsOverlay.ongeometrychange lets us know when the size or visibility changes.
Let’s use this to be aware of the size of the title bar area and move the header down if it’s too narrow.
if (navigator.windowControlsOverlay) {
navigator.windowControlsOverlay.addEventListener('geometrychange', () => {
const { width } = navigator.windowControlsOverlay.getBoundingClientRect();
document.body.classList.toggle('narrow', width < 250);
});
}
In the example above, we set the narrow class on the body of the app if the title bar area is narrower than 250px. We could do something similar with a media query, but using the windowControlsOverlay API has two advantages for our use case:
- It’s only fired when the feature is supported and used; we don’t want to adapt the design otherwise.
- We get the size of the title bar area across operating systems, which is great because the size of the window controls is different on Mac and Windows. Using a media query wouldn’t make it possible for us to know exactly how much space remains.
.narrow header {
top: env(titlebar-area-height, 0);
left: 0;
width: 100%;
}
Using the above CSS code, we can move our header down to stay clear of the window control buttons when the window is too narrow, and move the thumbnails down accordingly.
A screenshot of the 1DIV app on Windows showing the app’s content adjusted for a much narrower viewport.
Thirty pixels of exciting design opportunities
Using the Window Controls Overlay feature, we were able to take our simple demo app and turn it into something that feels so much more integrated on desktop devices. Something that reaches out of the usual window constraints and provides a custom experience for its users.
In reality, this feature only gives us about 30 pixels of extra room and comes with challenges on how to deal with the window controls. And yet, this extra room and those challenges can be turned into exciting design opportunities.
More devices of all shapes and forms get invented all the time, and the web keeps on evolving to adapt to them. New features get added to the web platform to allow us, web authors, to integrate more and more deeply with those devices. From watches or foldable devices to desktop computers, we need to evolve our design approach for the web. Building for the web now lets us think outside the rectangular box.
So let’s embrace this. Let’s use the standard technologies already at our disposal, and experiment with new ideas to provide tailored experiences for all devices, all from a single codebase!
If you get a chance to try the Window Controls Overlay feature and have feedback about it, you can open issues on the spec’s repository. It’s still early in the development of this feature, and you can help make it even better. Or, you can take a look at the feature’s existing documentation, or this demo app and its source code.
How to Sell UX Research with Two Simple Questions
Do you find yourself designing screens with only a vague idea of how the things on the screen relate to the things elsewhere in the system? Do you leave stakeholder meetings with unclear directives that often seem to contradict previous conversations? You know a better understanding of user needs would help the team get clear on what you are actually trying to accomplish, but time and budget for research is tight. When it comes to asking for more direct contact with your users, you might feel like poor Oliver Twist, timidly asking, “Please, sir, I want some more.”
Here’s the trick. You need to get stakeholders themselves to identify high-risk assumptions and hidden complexity, so that they become just as motivated as you to get answers from users. Basically, you need to make them think it’s their idea.
In this article, I’ll show you how to collaboratively expose misalignment and gaps in the team’s shared understanding by bringing the team together around two simple questions:
- What are the objects?
- What are the relationships between those objects?
These two questions align to the first two steps of the ORCA process, which might become your new best friend when it comes to reducing guesswork. Wait, what’s ORCA?! Glad you asked.
ORCA stands for Objects, Relationships, CTAs, and Attributes, and it outlines a process for creating solid object-oriented user experiences. Object-oriented UX is my design philosophy. ORCA is an iterative methodology for synthesizing user research into an elegant structural foundation to support screen and interaction design. OOUX and ORCA have made my work as a UX designer more collaborative, effective, efficient, fun, strategic, and meaningful.
The ORCA process has four iterative rounds and a whopping fifteen steps. In each round we get more clarity on our Os, Rs, Cs, and As.
The four rounds and fifteen steps of the ORCA process. In the OOUX world, we love color-coding. Blue is reserved for objects! (Yellow is for core content, pink is for metadata, and green is for calls-to-action. Learn more about the color-coded object map and connecting CTAs to objects.)
I sometimes say that ORCA is a “garbage in, garbage out” process. To ensure that the testable prototype produced in the final round actually tests well, the process needs to be fed by good research. But if you don’t have a ton of research, the beginning of the ORCA process serves another purpose: it helps you sell the need for research.
ORCA strengthens the weak spot between research and design by helping distill research into solid information architecture—scaffolding for the screen design and interaction design to hang on.
In other words, the ORCA process serves as a gauntlet between research and design. With good research, you can gracefully ride the killer whale from research into design. But without good research, the process effectively spits you back into research with a cache of specific open questions.
Getting in the same curiosity-boat
What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.
Mark Twain
The first two steps of the ORCA process—Object Discovery and Relationship Discovery—shine a spotlight on the dark, dusty corners of your team’s misalignments and any inherent complexity that’s been swept under the rug. It begins to expose what this classic comic so beautifully illustrates:
The original “Tree Swing Project Management” cartoon dates back to the 1960s or 1970s and has no artist attribution we could find.
This is one reason why so many UX designers are frustrated in their job and why many projects fail. And this is also why we often can’t sell research: every decision-maker is confident in their own mental picture.
Once we expose hidden fuzzy patches in each picture and the differences between them all, the case for user research makes itself.
But how we do this is important. However much we might want to, we can’t just tell everyone, “YOU ARE WRONG!” Instead, we need to facilitate and guide our team members to self-identify holes in their picture. When stakeholders take ownership of assumptions and gaps in understanding, BAM! Suddenly, UX research is not such a hard sell, and everyone is aboard the same curiosity-boat.
Say your users are doctors. And you have no idea how doctors use the system you are tasked with redesigning.
You might try to sell research by honestly saying: “We need to understand doctors better! What are their pain points? How do they use the current app?” But here’s the problem with that. Those questions are vague, and the answers to them don’t feel acutely actionable.
Instead, you want your stakeholders themselves to ask super-specific questions. This is more like the kind of conversation you need to facilitate. Let’s listen in:
“Wait a sec, how often do doctors share patients? Does a patient in this system have primary and secondary doctors?”
“Can a patient even have more than one primary doctor?”
“Is it a ‘primary doctor’ or just a ‘primary caregiver’… Can’t that role be a nurse practitioner?”
“No, caregivers are something else… That’s the patient’s family contacts, right?”
“So are caregivers in scope for this redesign?”
“Yeah, because if a caregiver is present at an appointment, the doctor needs to note that. Like, tag the caregiver on the note… Or on the appointment?”
Now we are getting somewhere. Do you see how powerful it can be getting stakeholders to debate these questions themselves? The diabolical goal here is to shake their confidence—gently and diplomatically.
When these kinds of questions bubble up collaboratively and come directly from the mouths of your stakeholders and decision-makers, suddenly, designing screens without knowing the answers to these questions seems incredibly risky, even silly.
If we create software without understanding the real-world information environment of our users, we will likely create software that does not align to the real-world information environment of our users. And this will, hands down, result in a more confusing, more complex, and less intuitive software product.
The two questions
But how do we get to these kinds of meaty questions diplomatically, efficiently, collaboratively, and reliably?
We can do this by starting with those two big questions that align to the first two steps of the ORCA process:
- What are the objects?
- What are the relationships between those objects?
In practice, getting to these answers is easier said than done. I’m going to show you how these two simple questions can provide the outline for an Object Definition Workshop. During this workshop, these “seed” questions will blossom into dozens of specific questions and shine a spotlight on the need for more user research.
Prep work: Noun foraging
In the next section, I’ll show you how to run an Object Definition Workshop with your stakeholders (and entire cross-functional team, hopefully). But first, you need to do some prep work.
Basically, look for nouns that are particular to the business or industry of your project, and do it across at least a few sources. I call this noun foraging.
Here are just a few great noun foraging sources:
- the product’s marketing site
- the product’s competitors’ marketing sites (competitive analysis, anyone?)
- the existing product (look at labels!)
- user interview transcripts
- notes from stakeholder interviews or vision docs from stakeholders
Put your detective hat on, my dear Watson. Get resourceful and leverage what you have. If all you have is a marketing website, some screenshots of the existing legacy system, and access to customer service chat logs, then use those.
As you peruse these sources, watch for the nouns that are used over and over again, and start listing them (preferably on blue sticky notes if you’ll be creating an object map later!).
You’ll want to focus on nouns that might represent objects in your system. If you are having trouble determining if a noun might be object-worthy, remember the acronym SIP and test for:
- Structure
- Instances
- Purpose
Think of a library app, for example. Is “book” an object?
Structure: can you think of a few attributes for this potential object? Title, author, publish date… Yep, it has structure. Check!
Instances: what are some examples of this potential “book” object? Can you name a few? The Alchemist, Ready Player One, Everybody Poops… OK, check!
Purpose: why is this object important to the users and business? Well, “book” is what our library client is providing to people and books are why people come to the library… Check, check, check!
SIP: Structure, Instances, and Purpose! (Here’s a flowchart where I elaborate even more on SIP.)
As you are noun foraging, focus on capturing the nouns that have SIP. Avoid capturing components like dropdowns, checkboxes, and calendar pickers—your UX system is not your design system! Components are just the packaging for objects—they are a means to an end. No one is coming to your digital place to play with your dropdown! They are coming for the VALUABLE THINGS and what they can do with them. Those things, or objects, are what we are trying to identify.
Let’s say we work for a startup disrupting the email experience. This is how I’d start my noun foraging.
First I’d look at my own email client, which happens to be Gmail. I’d then look at Outlook and the new HEY email. I’d look at Yahoo, Hotmail…I’d even look at Slack and Basecamp and other so-called “email replacers.” I’d read some articles, reviews, and forum threads where people are complaining about email. While doing all this, I would look for and write down the nouns.
(Before moving on, feel free to go noun foraging for this hypothetical product, too, and then scroll down to see how much our lists match up. Just don’t get lost in your own emails! Come back to me!)
Drumroll, please…
Here are a few nouns I came up with during my noun foraging:
- email message
- thread
- contact
- client
- rule/automation
- email address that is not a contact?
- contact groups
- attachment
- Google doc file / other integrated file
- newsletter? (HEY treats this differently)
- saved responses and templates
Scan your list of nouns and pick out words that you are completely clueless about. In our email example, it might be client or automation. Do as much homework as you can before your session with stakeholders: google what’s googleable. But other terms might be so specific to the product or domain that you need to have a conversation about them.
Aside: here are some real nouns foraged during my own past project work that I needed my stakeholders to help me understand:
- Record Locator
- Incentive Home
- Augmented Line Item
- Curriculum-Based Measurement Probe
This is really all you need to prepare for the workshop session: a list of nouns that represent potential objects and a short list of nouns that need to be defined further.
Facilitate an Object Definition Workshop
You could actually start your workshop with noun foraging—this activity can be done collaboratively. If you have five people in the room, pick five sources, assign one to every person, and give everyone ten minutes to find the objects within their source. When the time’s up, come together and find the overlap. Affinity mapping is your friend here!
If your team is short on time and might be reluctant to do this kind of grunt work (which is usually the case) do your own noun foraging beforehand, but be prepared to show your work. I love presenting screenshots of documents and screens with all the nouns already highlighted. Bring the artifacts of your process, and start the workshop with a five-minute overview of your noun foraging journey.
HOT TIP: before jumping into the workshop, frame the conversation as a requirements-gathering session to help you better understand the scope and details of the system. You don’t need to let them know that you’re looking for gaps in the team’s understanding so that you can prove the need for more user research—that will be our little secret. Instead, go into the session optimistically, as if your knowledgeable stakeholders and PMs and biz folks already have all the answers.
Then, let the question whack-a-mole commence.
1. What is this thing?
Want to have some real fun? At the beginning of your session, ask stakeholders to privately write definitions for the handful of obscure nouns you might be uncertain about. Then, have everyone show their cards at the same time and see if you get different definitions (you will). This is gold for exposing misalignment and starting great conversations.
As your discussion unfolds, capture any agreed-upon definitions. And when uncertainty emerges, quietly (but visibly) start an “open questions” parking lot. 😉
After definitions solidify, here’s a great follow-up:
2. Do our users know what these things are? What do users call this thing?
Stakeholder 1: They probably call email clients “apps.” But I’m not sure.
Stakeholder 2: Automations are often called “workflows,” I think. Or, maybe users think workflows are something different.
If a more user-friendly term emerges, ask the group if they can agree to use only that term moving forward. This way, the team can better align to the users’ language and mindset.
OK, moving on.
If you have two or more objects that seem to overlap in purpose, ask one of these questions:
3. Are these the same thing? Or are these different? If they are not the same, how are they different?
You: Is a saved response the same as a template?
Stakeholder 1: Yes! Definitely.
Stakeholder 2: I don’t think so… A saved response is text with links and variables, but a template is more about the look and feel, like default fonts, colors, and placeholder images.
Continue to build out your growing glossary of objects. And continue to capture areas of uncertainty in your “open questions” parking lot.
If you successfully determine that two similar things are, in fact, different, here’s your next follow-up question:
4. What’s the relationship between these objects?
You: Are saved responses and templates related in any way?
Stakeholder 3: Yeah, a template can be applied to a saved response.
You, always with the follow-ups: When is the template applied to a saved response? Does that happen when the user is constructing the saved response? Or when they apply the saved response to an email? How does that actually work?
Listen. Capture uncertainty. Once the list of “open questions” grows to a critical mass, pause to start assigning questions to groups or individuals. Some questions might be for the dev team (hopefully at least one developer is in the room with you). One question might be specifically for someone who couldn’t make it to the workshop. And many questions will need to be labeled “user.”
Do you see how we are building up to our UXR sales pitch?
5. Is this object in scope?
Your next question narrows the team’s focus toward what’s most important to your users. You can simply ask, “Are saved responses in scope for our first release?,” but I’ve got a better, more devious strategy.
By now, you should have a list of clearly defined objects. Ask participants to sort these objects from most to least important, either in small breakout groups or individually. Then, like you did with the definitions, have everyone reveal their sort order at once. Surprisingly—or not so surprisingly—it’s not unusual for the VP to rank something like “saved responses” as #2 while everyone else puts it at the bottom of the list. Try not to look too smug as you inevitably expose more misalignment.
I did this for a startup a few years ago. We posted the three groups’ wildly different sort orders on the whiteboard.
Here’s a snippet of the very messy middle from this session: three columns of object cards, showing the same cards prioritized completely differently by three different groups.
The CEO stood back, looked at it, and said, “This is why we haven’t been able to move forward in two years.”
Admittedly, it’s tragic to hear that, but as a professional, it feels pretty awesome to be the one who facilitated a watershed realization.
Once you have a good idea of in-scope, clearly defined things, this is when you move on to doing more relationship mapping.
6. Create a visual representation of the objects’ relationships
We’ve already done a bit of this while trying to determine if two things are different, but this time, ask the team about every potential relationship. For each object, ask how it relates to all the other objects. In what ways are the objects connected? To visualize all the connections, pull out your trusty boxes-and-arrows technique. Here, we are connecting our objects with verbs. I like to keep my verbs to simple “has a” and “has many” statements.
A work-in-progress system model of our new email solution.
This system modeling activity brings up all sorts of new questions:
- Can a saved response have attachments?
- Can a saved response use a template? If so, if an email uses a saved response with a template, can the user override that template?
- Do users want to see all the emails they sent that included a particular attachment? For example, “show me all the emails I sent with ProfessionalImage.jpg attached. I’ve changed my professional photo and I want to alert everyone to update it.”
Solid answers might emerge directly from the workshop participants. Great! Capture that new shared understanding. But when uncertainty surfaces, continue to add questions to your growing parking lot.
Light the fuse
You’ve positioned the explosives all along the floodgates. Now you simply have to light the fuse and BOOM. Watch the buy-in for user research flooooow.
Before your workshop wraps up, have the group reflect on the list of open questions. Make plans for getting answers internally, then focus on the questions that need to be brought before users.
Here’s your final step. Take those questions you’ve compiled for user research and discuss the level of risk associated with NOT answering them. Ask, “if we design without an answer to this question, if we make up our own answer and we are wrong, how bad might that turn out?”
With this methodology, we are cornering our decision-makers into advocating for user research as they themselves label questions as high-risk. Sorry, not sorry.
Now is your moment of truth. With everyone in the room, ask for a reasonable budget of time and money to conduct 6–8 user interviews focused specifically on these questions.
HOT TIP: if you are new to UX research, please note that you’ll likely need to rephrase the questions that came up during the workshop before you present them to users. Make sure your questions are open-ended and don’t lead the user into any default answers.
Final words: Hold the screen design!
Seriously, if at all possible, do not ever design screens again without first answering these fundamental questions: what are the objects and how do they relate?
I promise you this: if you can secure a shared understanding between the business, design, and development teams before you start designing screens, you will have less heartache and save more time and money, and (it almost feels like a bonus at this point!) users will be more receptive to what you put out into the world.
I sincerely hope this helps you win time and budget to go talk to your users and gain clarity on what you are designing before you start building screens. If you find success using noun foraging and the Object Definition Workshop, there’s more where that came from in the rest of the ORCA process, which will help prevent even more late-in-the-game scope tugs-of-war and strategy pivots.
All the best of luck! Now go sell research!
A Content Model Is Not a Design System
Do you remember when having a great website was enough? Now, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.
But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model—a definition of content types, attributes, and relationships that let people and systems understand content—with my more familiar design-system thinking would capsize my customer’s omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content.
I recently had the opportunity to lead the CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery—designing content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces.
A content model is a critical foundation for an omnichannel content strategy, and for our content to be understood by multiple systems, the model needed semantic types—types named according to their meaning instead of their presentation. Our goal was to let authors create content and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale that my customer needed required the whole team to recognize a new pattern.
Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can’t rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with our familiar design-system thinking constantly led us to veer away from one of the primary purposes of a content model: delivering content to audiences on multiple marketing channels.
Two essential principles for an effective content model
We needed to help our designers, developers, and stakeholders understand that we were doing something very different from their prior web projects, where it was natural for everyone to think about content as visual building blocks fitting into layouts. The previous approach was not only more familiar but also more intuitive—at least at first—because it made the designs feel more tangible. We discovered two principles that helped the team understand how a content model differs from the design systems that we were used to:
- Content models must define semantics instead of layout.
- And content models should connect content that belongs together.
A semantic content model uses type and attribute names that reflect the meaning of the content, not how it will be displayed. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. Although these types might make it easy to lay out content, they don’t help delivery channels understand the content’s meaning, and it is that understanding that would have opened the door to presenting the content in each marketing channel. In contrast, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit.
When you’re creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are intelligible to platforms like Google search.
A semantic content model has several benefits:
- Even if your team doesn’t care about omnichannel content, a semantic content model decouples content from its presentation so that teams can evolve the website’s design without needing to refactor its content. In this way, content can withstand disruptive website redesigns.
- A semantic content model also provides a competitive edge. By adding structured data based on Schema.org’s types and properties, a website can provide hints to help Google understand the content, display it in search snippets or knowledge panels, and use it to answer voice-interface user questions. Potential visitors could discover your content without ever setting foot in your website.
- Beyond those practical benefits, you’ll also need a semantic content model if you want to deliver omnichannel content. To use the same content in multiple marketing channels, delivery channels need to be able to understand it. For example, if your content model were to provide a list of questions and answers, it could easily be rendered on a frequently asked questions (FAQ) page, but it could also be used in a voice interface or by a bot that answers common questions.
For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.
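To make the FAQ case above concrete, here is a rough sketch of the kind of Schema.org-based structured data a semantic question-and-answer type could feed; the question and answer text are made up purely for illustration:
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Can I reuse this content in a voice interface?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Because the question and its answer are stored together as one semantic unit, any delivery channel can present the pair."
      }
    }
  ]
}
Because the model is semantic, the same question-and-answer content could render an FAQ page, feed this structured data, or answer a voice query without any restructuring.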
Content models that connect
After struggling to describe what makes a good content model, I’ve come to realize that the best models are those that are semantic and that also connect related content components (such as a FAQ item’s question and answer pair), instead of slicing up related content across disparate content components. A good content model connects content that should remain together so that multiple delivery channels can use it without needing to first put those pieces back together.
Think about writing an article or essay. An article’s meaning and usefulness depend upon its parts being kept together. Would one of the headings or paragraphs be meaningful on their own without the context of the full article? On our project, our familiar design-system thinking often led us to want to create content models that would slice content into disparate chunks to fit the web-centric layout. The impact was similar to separating an article from its headline. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.
To illustrate, let’s look at how connecting related content applies in a real-world scenario. The design team for our customer presented a complex layout for a software product page that included multiple tabs and sections. Our instincts were to follow suit with the content model. Shouldn’t we make it as easy and as flexible as possible to add any number of tabs in the future?
Because our design-system instincts were so familiar, it felt like we had needed a content type called “tab section” so that multiple tab sections could be added to a page. Each tab section would display various types of content. One tab might provide the software’s overview or its specifications. Another tab might provide a list of resources.
Our inclination to break down the content model into “tab section” pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have also created content that couldn’t have been understood by additional delivery channels. For example, how would another system have been able to tell which “tab section” referred to a product’s specifications or its resource list—would it have had to resort to counting tab sections and content blocks? This would have prevented the tabs from ever being reordered, and it would have required adding logic in every other delivery channel to interpret the design system’s layout. Furthermore, if the customer no longer wanted to display this content in a tab layout, it would have been tedious to migrate to a new content model to reflect the new page redesign.
A content model based on design components is unnecessarily complex, and it’s unintelligible to systems.
We had a breakthrough when we discovered that our customer had a specific purpose in mind for each tab: it would reveal specific information such as the software product’s overview, specifications, related resources, and pricing. Once implementation began, our inclination to focus on what’s visual and familiar had obscured the intent of the designs. With a little digging, it didn’t take long to realize that the concept of tabs wasn’t relevant to the content model. The meaning of the content that they were planning to display in the tabs was what mattered.
In fact, the customer could have decided to display this content in a different way—without tabs—somewhere else. This realization prompted us to define content types for the software product based on the meaningful attributes that the customer had wanted to render on the web. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The software’s product information stayed together because it wasn’t sliced across separate components like “tab sections” that were derived from the content’s presentation. Any delivery channel—including future ones—could understand and present this content.
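As a rough illustration (the field names here are hypothetical, not the customer’s actual model), the resulting semantic content type might look something like this, with no trace of tabs or layout:
{
  "type": "softwareProduct",
  "name": "Example Product",
  "description": "A short, channel-agnostic summary of what the product does.",
  "screenshots": ["dashboard-overview.png"],
  "softwareRequirements": ["Requires OS version 12 or later"],
  "featureList": ["Feature A", "Feature B"],
  "relatedResources": ["getting-started-guide"],
  "pricing": "See pricing page"
}
Any channel that understands these attributes can decide for itself whether to present them as tabs, a single page, or something else entirely.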
A good content model connects content that belongs together so it can be easily managed and reused.
Conclusion
In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept content together that belonged together (instead of fragmenting it). These two concepts curtailed our temptation to shape the content model based on the design. So if you’re working on a content model to support an omnichannel content strategy—or even if you just want to make sure that Google and other interfaces understand your content—remember:
- A design system isn’t a content model. Team members may be tempted to conflate them and to make your content model mirror your design system, so you should protect the semantic value and contextual structure of the content strategy during the entire implementation process. This will let every delivery channel consume the content without needing a magic decoder ring.
- If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.org–based structured data in your website. Even if additional delivery channels aren’t on the immediate horizon, the benefit to search engine optimization is a compelling reason on its own.
- Additionally, remind the team that decoupling the content model from the design will let them update the designs more easily because they won’t be held back by the cost of content migrations. They’ll be able to create new designs without the obstacle of compatibility between the design and the content, and they’ll be ready for the next big thing.
By rigorously advocating for these principles, you’ll help your team treat content the way that it deserves—as the most critical asset in your user experience and the best way to connect with your audience.
Design for Safety, An Excerpt
Antiracist economist Kim Crayton says that “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical tech—but what, specifically, do we need to do to fix it? The intention to make our tech safer is not enough; we need a strategy.
This chapter will equip you with that plan of action. It covers how to integrate safety principles into your design work in order to create tech that’s safe, how to convince your stakeholders that this work is necessary, and how to respond to the critique that what we actually need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to fixing unethical, unsafe tech.)
The process for inclusive safety
When you are designing for safety, your goals are to:
- identify ways your product can be used for abuse,
- design ways to prevent the abuse, and
- provide support for vulnerable users to reclaim power and control.
The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). It’s a methodology I created in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action:
- Conducting research
- Creating archetypes
- Brainstorming problems
- Designing solutions
- Testing for safety
The Process is meant to be flexible—it won’t make sense for teams to implement every step in some situations. Use the parts that are relevant to your unique work and context; this is meant to be something you can insert into your existing design practice.
And once you use it, if you have an idea for making it better or simply want to provide context of how it helped your team, please get in touch with me. It’s a living document that I hope will continue to be a useful and realistic tool that technologists can use in their day-to-day work.
If you’re working on a product specifically for a vulnerable group or survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or drug addiction, be sure to read Chapter 7, which covers that situation explicitly and should be handled a bit differently. The guidelines here are for prioritizing safety when designing a more general product that will have a wide user base (which, we already know from statistics, will include certain groups that should be protected from harm). Chapter 7 is focused on products that are specifically for vulnerable groups and people who have experienced trauma.
Step 1: Conduct research
Design research should include a broad analysis of how your tech might be weaponized for abuse as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment.
Broad research
Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If your product will involve AI, seek to understand the potentials for racism and other issues that have been reported in existing AI products. Nearly all types of technology have some kind of potential or actual harm that’s been reported on in the news or written about by academics. Google Scholar is a useful tool for finding these studies.
Specific research: Survivors
When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Ideally, you’ll want to interview advocates working in the space of your research first so that you have a more solid understanding of the topic and are better equipped to not retraumatize survivors. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers.
Especially when interviewing survivors of any kind of trauma, it is important to pay people for their knowledge and lived experiences. Don’t ask survivors to share their trauma for free, as this is exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. We’ll talk more about how to appropriately interview survivors in Chapter 6.
Specific research: Abusers
It’s unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don’t make this a goal; rather, try to get at this angle in your general research. Aim to understand how abusers or bad actors weaponize technology to use against others, how they cover their tracks, and how they explain or rationalize the abuse.
Step 2: Create archetypes
Once you’ve finished conducting your research, use your insights to create abuser and survivor archetypes. Archetypes are not personas, as they’re not based on real people that you interviewed and surveyed. Instead, they’re based on your research into likely safety issues, much like when we design for accessibility: we don’t need to have found a group of blind or low-vision users in our interview pool to create a design that’s inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and can be more generalized.
The abuser archetype is someone who will look at the product as a tool to perform harm (Fig 5.2). They may be trying to harm someone they don’t know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally.
Fig 5.2: Harry Oleson, an abuser archetype for a fitness product, is looking for ways to stalk his ex-girlfriend through the fitness apps she uses.
The survivor archetype is someone who is being abused with the product. There are various situations to consider in terms of the archetype’s understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they’ve been targeted in the first place and need to be alerted (Fig 5.3)?
Fig 5.3: The survivor archetype Lisa Zwaan suspects her husband is weaponizing their home’s IoT devices against her, but in the face of his insistence that she simply doesn’t understand how to use the products, she’s unsure. She needs some kind of proof of the abuse.
You may want to make multiple survivor archetypes to capture a range of different experiences. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices; or they know it’s happening but don’t know how, such as when a stalker keeps figuring out their location (Fig 5.4). Include as many of these scenarios as you need to in your survivor archetype. You’ll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.
Fig 5.4: The survivor archetype Eric Mitchell knows he’s being stalked by his ex-boyfriend Rob but can’t figure out how Rob is learning his location information.
It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Instead of focusing on the demographic information we often see in personas, focus on their goals. The goals of the abuser will be to carry out the specific abuse you’ve identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll brainstorm how to prevent the abuser’s goals and assist the survivor’s goals.
And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as you need to. For example, if you uncovered an issue with security, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would get the abuser archetype and the child’s parents would get the survivor archetype.
Step 3: Brainstorm problems
After creating archetypes, brainstorm novel abuse cases and safety issues. “Novel” means things not found in your research; you’re trying to identify completely new safety issues that are unique to your product or service. The goal with this step is to exhaust every effort of identifying harms your product could cause. You aren’t worrying about how to prevent the harm yet—that comes in the next step.
How could your product be used for any kind of abuse, outside of what you’ve already identified in your research? I recommend setting aside at least a few hours with your team for this process.
If you’re looking for somewhere to start, try doing a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the show—the most wild, awful, out-of-control ways it could be used for harm. When I’ve led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is great—it’s okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, and then dialing it back and using the rest of the time thinking of more realistic forms of harm.
After you’ve identified as many opportunities for abuse as possible, you may still not feel confident that you’ve uncovered every potential form of harm. A healthy amount of anxiety is normal when you’re doing this kind of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we’ve missed something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step.
It’s impossible to guarantee you’ve thought of everything; instead of aiming for 100 percent assurance, recognize that you’ve taken this time and have done the best you can, and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.
Step 4: Design solutions
At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. The next step is to identify ways to design against the identified abuser’s goals and to support the survivor’s goals. This step is a good one to insert alongside existing parts of your design process where you’re proposing solutions for the various problems your research uncovered.
Some questions to ask yourself to help prevent harm and support your archetypes include:
- Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what roadblocks can you put up to prevent the harm from happening?
- How can you make the victim aware that abuse is happening through your product?
- How can you help the victim understand what they need to do to make the problem stop?
- Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product help the user access support?
In some products, it’s possible to proactively recognize that harm is happening. For example, a pregnancy app might be modified to allow the user to report that they were the victim of an assault, which could trigger an offer to receive resources for local and national organizations. This sort of proactiveness is not always possible, but it’s worth taking a half hour to discuss if any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.
That said, use caution: you don’t want to do anything that could put a user in harm’s way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. We’ll walk through a good example of this in the next chapter.
Step 5: Test for safety
The final step is to test your prototypes from the point of view of your archetypes: the person who wants to weaponize the product for harm and the victim of the harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you’ll aim to rigorously test out your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.
Ideally, safety testing happens along with usability testing. If you’re at a company that doesn’t do usability testing, you might be able to use safety testing to cleverly perform both; a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don’t make sense to them.
You’ll want to conduct safety testing on either your final prototype or the actual product if it’s already been released. There’s nothing wrong with testing an existing product that wasn’t designed with safety goals in mind from the onset—“retrofitting” it for safety is a good thing to do.
Remember that testing for safety involves testing from the perspective of both an abuser and a survivor, though it may not make sense for you to do both. Additionally, if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one.
As with other sorts of usability testing, you as the designer are most likely too close to the product and its design by this point to be a valuable tester; you know the product too well. Instead of doing it yourself, set up testing as you would with other usability testing: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.
Abuser testing
The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike with usability testing, you want to make it impossible, or at least difficult, for them to achieve their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.
For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this goal in mind, you’d try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers.
If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you know now that your product enables stalking. Your next step is to go back to step 4 and figure out how to prevent this from happening. You may need to repeat the process of designing solutions and testing them more than once.
Survivor testing
Survivor testing involves identifying how to give information and power to the survivor. It might not always make sense based on the product or context. Thwarting the attempt of an abuser archetype to stalk someone also satisfies the goal of the survivor archetype to not be stalked, so separate testing wouldn’t be needed from the survivor’s perspective.
However, there are cases where it makes sense. For example, for a smart thermostat, a survivor archetype’s goals would be to understand who or what is making the temperature change when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking for usernames, actions, and times; if you couldn’t find that information, you would have more work to do in step 4.
Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. Your test would involve attempting to figure out how to do this: are there instructions that explain how to remove another user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account.
Stress testing
To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors pointed out that personas typically center people who are having a good day—but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are called “stress cases,” and testing your products for users in stress-case situations can help you identify places where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design as well as many other great tactics for compassionate design.
Sustainable Web Design, An Excerpt
In the 1950s, many in the elite running community had begun to believe it wasn’t possible to run a mile in less than four minutes. Runners had been attempting it since the late 19th century and were beginning to draw the conclusion that the human body simply wasn’t built for the task.
But on May 6, 1954, Roger Bannister took everyone by surprise. It was a cold, wet day in Oxford, England—conditions no one expected to lend themselves to record-setting—and yet Bannister did just that, running a mile in 3:59.4 and becoming the first person in the record books to run a mile in under four minutes.
This shift in the benchmark had profound effects; the world now knew that the four-minute mile was possible. Bannister’s record lasted only forty-six days, when it was snatched away by Australian runner John Landy. Then a year later, three runners all beat the four-minute barrier together in the same race. Since then, over 1,400 runners have officially run a mile in under four minutes; the current record is 3:43.13, held by Moroccan athlete Hicham El Guerrouj.
We achieve far more when we believe that something is possible, and we will believe it’s possible only when we see someone else has already done it—and as with human running speed, so it is with what we believe are the hard limits for how a website needs to perform.
Establishing standards for a sustainable web
In most major industries, the key metrics of environmental performance are fairly well established, such as miles per gallon for cars or energy per square meter for homes. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren’t held to any particular environmental standards, and only recently have gained the tools and methods we need to even make an environmental assessment.
The primary goal in sustainable web design is to reduce carbon emissions. However, it’s almost impossible to actually measure the amount of CO2 produced by a web product. We can’t measure the fumes coming out of the exhaust pipes on our laptops. The emissions of our websites are far away, out of sight and out of mind, coming out of power stations burning coal and gas. We have no way to trace the electrons from a website or app back to the power station where the electricity is being generated and actually know the exact amount of greenhouse gas produced. So what do we do?
If we can’t measure the actual carbon emissions, then we need to find what we can measure. The primary factors that could be used as indicators of carbon emissions are:
- Data transfer
- Carbon intensity of electricity
Let’s take a look at how we can use these metrics to quantify the energy consumption, and in turn the carbon footprint, of the websites and web apps we create.
Data transfer
Most researchers use kilowatt-hours per gigabyte (kWh/GB) as a metric of energy efficiency when measuring the amount of data transferred over the internet when a website or application is used. This provides a great reference point for energy consumption and carbon emissions. As a rule of thumb, the more data transferred, the more energy used in the data center, telecoms networks, and end user devices.
For web pages, data transfer for a single visit can be most easily estimated by measuring the page weight, meaning the transfer size of the page in kilobytes the first time someone visits the page. It’s fairly easy to measure using the developer tools in any modern web browser. Often your web hosting account will include statistics for the total data transfer of any web application (Fig 2.1).
Fig 2.1: The Kinsta hosting dashboard displays data transfer alongside traffic volumes. If you divide data transfer by visits, you get the average data per visit, which can be used as a metric of efficiency.
The nice thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes.
There is plenty of scope for reducing page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile,” with desktop increasing 36 percent since January 2016 and mobile page weights nearly doubling in the same period (Fig 2.2). Roughly half of this data transfer is image files, making images the single biggest source of carbon emissions on the average website.
History clearly shows us that our web pages can be smaller, if only we set our minds to it. While most technologies become ever more energy efficient, including the underlying technology of the web such as data centers and transmission networks, websites themselves are a technology that becomes less efficient as time goes on.
Fig 2.2: The historical page weight data from HTTP Archive can teach us a lot about what is possible in the future.
You might be familiar with the concept of performance budgeting as a way of focusing a project team on creating faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Much like speed limits while driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in under budget.
Designing for fast performance does often lead to reduced data transfer and emissions, but it isn’t always the case. Web performance is often more about the subjective perception of load times than it is about the true efficiency of the underlying system, whereas page weight and transfer size are more objective measures and more reliable benchmarks for sustainable web design.
We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark page weight against competitors or the old version of the website we’re replacing. For example, we might set a maximum page weight budget as equal to our most efficient competitor, or we could set the benchmark lower to guarantee we are best in class.
If we want to take it to the next level, then we could also start looking at the transfer size of our web pages for repeat visitors. Although page weight for the first time someone visits is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more if we start looking at transfer size in other scenarios too. For example, visitors who load the same page multiple times will likely have a high percentage of the files cached in their browser, meaning they don’t need to transfer all of the files on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached in their browser. Measuring transfer size at this next level of detail can help us learn even more about how we can optimize efficiency for users who regularly visit our pages, and enable us to set page weight budgets for additional scenarios beyond the first visit.
Page weight budgets are easy to track throughout a design and development process. Although they don’t directly tell us carbon emissions or energy consumption, they give us a clear indication of efficiency relative to other websites. And as transfer size is an effective analog for energy consumption, we can actually use it to estimate energy consumption too.
In summary, reduced data transfer translates to energy efficiency, a key factor to reducing carbon emissions of web products. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. But as we’ll see next, since all web products demand some power, it’s important to consider the source of that electricity, too.
Carbon intensity of electricity
Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. Carbon intensity is a term used to define the grams of CO2 produced for every kilowatt-hour of electricity (gCO2/kWh). This varies widely, with renewable energy sources and nuclear having an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction); whereas fossil fuels have very high carbon intensity of approximately 200–400 gCO2/kWh.
Most electricity comes from national or state grids, where energy from a variety of different sources is mixed together with varying levels of carbon intensity. The distributed nature of the internet means that a single user of a website or app might be using energy from multiple different grids simultaneously; a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.
We don’t have control over the full energy supply of web services, but we do have some control over where we host our projects. With a data center using a significant proportion of the energy of any website, locating the data center in an area with low carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this user-contributed data, and a glance at their map shows how, for example, choosing a data center in France will have significantly lower carbon emissions than a data center in the Netherlands (Fig 2.3).
Fig 2.3: Tomorrow’s electricityMap shows live data for the carbon intensity of electricity by country.
That said, we don’t want to locate our servers too far away from our users; it takes energy to transmit data through the telecoms networks, and the further the data travels, the more energy is consumed. Just like food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles”—and we want it to be as small as possible.
Using the distance itself as a benchmark, we can use website analytics to identify the country, state, or even city where our core user group is located and measure the distance from that location to the data center used by our hosting company. This will be a somewhat fuzzy metric as we don’t know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea.
For example, if a website is hosted in London but the primary user base is on the West Coast of the USA, then we could look up the distance from London to San Francisco, which is 5,300 miles. That’s a long way! We can see that hosting it somewhere in North America, ideally on the West Coast, would significantly reduce the distance and thus the energy used to transmit the data. In addition, locating our servers closer to our visitors helps reduce latency and delivers better user experience, so it’s a win-win.
Converting it back to carbon emissions
If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created does this by measuring the data transfer over the wire when loading a web page, calculating the amount of electricity associated, and then converting that into a figure for CO2 (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.
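To make the arithmetic concrete, here is a rough worked example using illustrative numbers rather than figures from this chapter: a 2 MB page is roughly 0.002 GB of data transfer; at an assumed energy intensity of 0.8 kWh/GB, that is about 0.0016 kWh per page view; and on a grid with an assumed carbon intensity of 450 gCO2/kWh, that works out to roughly 0.7 g of CO2 per page view. At 100,000 page views a month, the same page would account for roughly 72 kg of CO2 per month. The point is less the exact figures than the chain of conversions: data transferred, to electricity used, to carbon emitted.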
If you want to take it to the next level and tailor the data more accurately to the unique aspects of your project, the Energy and Emissions Worksheet accompanying this book shows you how.
Fig 2.4: The Website Carbon Calculator shows how the Riverford Organic website embodies their commitment to sustainability, being both low carbon and hosted in a data center using renewable energy.
With the ability to calculate carbon emissions for our projects, we could actually take a page weight budget one step further and set carbon budgets as well. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn’t as intuitive—but carbon budgets do focus our minds on the primary thing we’re trying to reduce, and support the core objective of sustainable web design: reducing carbon emissions.
Browser Energy
Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but by giving us one number to represent the energy used in the data center, the telecoms networks, and the end user’s devices, it can’t offer us insights into the efficiency in any specific part of the system.
One part of the system we can look at in more detail is the energy used by end users’ devices. As front-end web technologies become more advanced, the computational load is increasingly moving from the data center to users’ devices, whether they be phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript. Furthermore, JavaScript libraries such as Angular and React allow us to create applications where the “thinking” work is done partly or entirely in the browser.
All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, more computation in the user’s web browser means more energy used by their devices. This has implications not just environmentally, but also for user experience and inclusivity. Applications that put a heavy processing load on the user’s device can inadvertently exclude users with older, slower devices and cause batteries on phones and laptops to drain faster. Furthermore, if we build web applications that require the user to have up-to-date, powerful devices, people throw away old devices much more frequently. This isn’t just bad for the environment, but it puts a disproportionate financial burden on the poorest in society.
Partly because the tools are limited, and partly because there are so many different models of devices, it’s difficult to measure website energy consumption on end users’ devices. One tool we do currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).
Fig 2.5: The Energy Impact meter in Safari (on the right) shows how a website consumes CPU energy.
You know when you load a website and your computer’s cooling fans start spinning so frantically you think it might actually take off? That’s essentially what this tool is measuring.
It shows us the percentage of CPU used and the duration of CPU usage when loading the web page, and uses these figures to generate an energy impact rating. It doesn’t give us precise data for the amount of electricity used in kilowatts, but the information it does provide can be used to benchmark how efficiently your websites use energy and set targets for improvement.
Voice Content and Usability
We’ve been having conversations for thousands of years. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only in the last few millennia have we begun to commit our conversations to writing, and only in the last few decades have we begun to outsource them to the computer, a machine that shows much more affinity for written correspondence than for the slangy vagaries of spoken language.
Computers have trouble because, of spoken and written language, speech is the more primordial. To have successful conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.
In contrast, written language immediately concretizes as we commit it to record and retains usages long after they become obsolete in spoken communication (the salutation “To whom it may concern,” for example), generating its own fossil record of outdated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.
Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists.
Voice Interactions
We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too (http://bkaprt.com/vcu36/01-01). Generally, we start up a conversation because:
- we need something done (such as a transaction),
- we want to know something (information of some sort), or
- we are social beings and want someone to talk to (conversation for conversation’s sake).
These three categories—which I call transactional, informational, and prosocial—also characterize essentially every voice interaction: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface’s first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense—a chat between people that leads to some result and lasts an arbitrary length of time—could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.
Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human—potentially alienating them in the process (http://bkaprt.com/vcu36/01-01).
That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome (“buy iced tea”) and an informational voice interaction teaching us something new (“discuss a musical”).
Transactional voice interactions
Unless you’re tapping buttons on a food delivery app, you’re generally having a conversation—and therefore a voice interaction—when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).
Alison: Hey, how’s it going?
Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?
Alison: Can I get a Hawaiian pizza with extra pineapple?
Burhan: Sure, what size?
Alison: Large.
Burhan: Anything else?
Alison: No thanks, that’s it.
Burhan: Something to drink?
Alison: I’ll have a bottle of Coke.
Burhan: You got it. That’ll be $13.55 and about fifteen minutes.
Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: they’re direct, to the point, and economical. They quickly dispense with pleasantries.
Informational voice interactions
Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we’re after much more.
Alison: Hey, how’s it going?
Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?
Alison: Can I ask a few questions?
Burhan: Of course! Go right ahead.
Alison: Do you have any halal options on the menu?
Burhan: Absolutely! We can make any pie halal by request. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you thinking about any other dietary restrictions?
Alison: What about gluten-free pizzas?
Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you?
Alison: That’s it for now. Good to know. Thanks!
Burhan: Anytime, come back soon!
This is a very different dialogue. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truth—research expeditions to gather data, news, or facts. Voice interactions that are informational might be more long-winded than transactional conversations by necessity. Responses tend to be lengthier, more informative, and carefully communicated so the customer understands the key takeaways.
Voice Interfaces
At their core, voice interfaces employ speech to support users in reaching their goals. But simply because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we’re most concerned in this book with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component whatsoever, and are therefore much more nuanced and challenging to tackle.
Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.
Interactive voice response (IVR) systems
Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.
IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to guide customers to a real phone agent (“Say Reservations to book a flight or check an itinerary”); chances are you will enter a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users’ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries (http://bkaprt.com/vcu36/01-02, PDF).
While IVR systems are great for highly repetitive, monotonous conversations that generally don’t veer from a single format, they have a reputation for less scintillating conversation than we’re used to in real life (or even in science fiction).
Screen readers
Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.
Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986 (http://bkaprt.com/vcu36/01-03). That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs) (http://bkaprt.com/vcu36/01-04).
With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. “At least they do when documents are authored thoughtfully” (http://bkaprt.com/vcu36/01-05).
Though deeply instructive for voice interface designers, there’s one significant problem with screen readers: they’re difficult to use and unremittingly verbose. The visual structures of websites and web navigation don’t translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.
In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:
From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and then, and only then, translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users. (http://bkaprt.com/vcu36/01-06)
In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, visual interface users have the benefit of darting around the viewport freely to find information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to employ clunky screen readers may find that voice interfaces, particularly more modern voice assistants, offer a more streamlined experience.
Voice assistants
When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential.
Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others formulated their vision for a Semantic Web “agent” that would perform typical errands like “checking calendars, making appointments, and finding locations” (http://bkaprt.com/vcu36/01-07, behind paywall). It wasn’t until 2011 that Apple’s Siri finally entered the picture, making voice assistants a tangible reality for consumers.
Thanks to the plethora of voice assistants available today, there is considerable variation in how programmable and customizable certain voice assistants are over others (Fig 1.1). At one extreme, everything except vendor-provided features is locked down; for example, at the time of their release, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, it isn’t possible to program Siri to perform arbitrary functions, because there’s no means by which developers can interact with Siri at a low level, apart from predefined categories of tasks like sending messages, hailing rideshares, making restaurant reservations, and certain others.
At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, programmable voice assistants that lend themselves to customization and extensibility are becoming increasingly popular for developers who feel stifled by the limitations of Siri and Cortana. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.
Fig 1.1: Voice assistants like Amazon Alexa and Google Home tend to be more programmable, and thus more flexible, than their counterpart Apple Siri.
As corporations like Amazon, Apple, Microsoft, and Google continue to stake their territory, they’re also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.
Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. By contrast, many development platforms like Google’s Dialogflow have introduced omnichannel capabilities so users can build a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts.
Voice Content
Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise—everything written content isn’t.
Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered auditorily—not as an option, but as a necessity.
For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: any content we already have isn’t in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions?
Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many respects, colossal vaults of what I call macrocontent: lengthy prose that can extend for infinitely scrollable miles in a browser window, like microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:
A day’s weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent. (http://bkaprt.com/vcu36/01-08)
I’d update Dash’s definition of microcontent to include all examples of bite-sized content that go well beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best opportunity to gauge how your content can be stretched to the very edges of its capabilities, informing delivery channels both established and novel.
As microcontent, voice content is unique because it’s an example of how content is experienced in time rather than in space. We can glance at a digital sign underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for periods of time that we can’t easily escape or skip, something screen reader users are all too familiar with.
Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content—and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.
Fundamentally, the legibility and discoverability of our voice content both have to do with how voice content manifests in perceived time and space.
Designing for the Unexpected
I’m not sure when I first heard this quote, but it’s something that has stayed with me over the years. How do you create services for situations you can’t imagine? Or design products that work on devices yet to be invented?
Flash, Photoshop, and responsive design
When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content into. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.
Ethan Marcotte’s talk at An Event Apart and subsequent article “Responsive Web Design” in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.
The fear wasn’t helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can’t just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.
A new way to design
Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:
.column-span-6 {
width: 49%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
.column-span-4 {
width: 32%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
.column-span-3 {
width: 24%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
Then I did the same with Sass, so I could take advantage of @include to reuse repeated blocks of code and move back to more semantic markup:
.logo {
@include colSpan(6);
}
.search {
@include colSpan(3);
}
.social-share {
@include colSpan(3);
}
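The colSpan mixin itself isn’t shown in the article. As a minimal sketch of what it might have looked like, assuming a twelve-column grid and the same 0.5% gutters as the utility classes above (the exact math is my assumption, not the original mixin):
@mixin colSpan($span) {
/* Convert the span to a fraction of a 12-column grid, then subtract the two 0.5% gutters. (Modern Sass would use math.div instead of the / operator.) */
width: ($span / 12 * 100%) - 1%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
For a span of 6 this yields 49%, matching the .column-span-6 utility class earlier.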
Media queries
The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable (The exact opposite problem occurred with the introduction of a mobile-first approach).
Components becoming too small at mobile breakpoints
Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on.
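A breakpoint from that kind of setup might have looked something like this sketch (the 768px threshold is illustrative, not a value from the article):
@media (max-width: 768px) {
/* Below an assumed tablet width, let every column span the full width. */
.column-span-6,
.column-span-4,
.column-span-3 {
width: 100%;
margin-right: 0;
margin-left: 0;
}
}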
For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which required a level of HTML knowledge.
Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.
<section class="row">
<div class="column-span-4">1 of 7</div>
<div class="column-span-4">2 of 7</div>
<div class="column-span-4">3 of 7</div>
</section>
<section class="row">
<div class="column-span-4">4 of 7</div>
<div class="column-span-4">5 of 7</div>
<div class="column-span-4">6 of 7</div>
</section>
<section class="row">
<div class="column-span-4">7 of 7</div>
</section>
Components placed in the rows of a Sass grid
Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components.
Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, then this is a real problem because you can only use these components if the devices you’re designing for correspond to the viewport sizes used in the pattern library—in the process not really hitting that “devices that don’t yet exist” goal.
Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?
Components responding to the viewport width with media queries
Container queries: our savior or a false dawn?
Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.
Components responding to their parent container with container queries
One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.
In other words, responsive components to replace responsive layouts.
Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
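As a rough sketch of the idea, based on the syntax in the current specification drafts (the selector names and the 400px threshold are illustrative assumptions):
.sidebar {
/* Make the sidebar a queryable container. */
container-type: inline-size;
}
@container (max-width: 400px) {
/* When the component's container is narrow, stack the card's contents. */
.card {
display: block;
}
}
The important shift is that the rule reacts to the width of .sidebar, not to the viewport, so the same card could behave differently in the main content area.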
My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component?
A component library removed from context and real content is probably not the best place for that decision.
As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?
Cards responding to their parent container with container queries
Cards responding based on their own content
In this example, the dimensions of the container are not what should dictate the design; rather, the image is.
It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.
CSS is changing
Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.
.wrapper {
display: grid;
grid-template-columns: repeat(auto-fit, 450px);
gap: 10px;
}
The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space.
.wrapper {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
.child {
flex-basis: 32%;
margin-bottom: 20px;
}
The biggest benefit of all this is you don’t need to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development.
A traditional Grid layout without the usual row containers
This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid.
Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they're given CMS access, like the illustration below?
Cards unable to respond to a sibling’s content changes
Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.
Cards responding to content in sibling cards
.wrapper {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
grid-template-rows: auto 1fr auto;
gap: 10px;
}
.sub-grid {
display: grid;
grid-row: span 3;
grid-template-rows: subgrid; /* sets rows to parent grid */
}
CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Meanwhile, Subgrid allows us to create designs that can adapt in order to suit morphing content. At the time of writing, Subgrid is only supported in Firefox, but the above code can be implemented behind an @supports feature query.
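For example, a feature query along these lines keeps the subgrid rules from applying in browsers that don’t yet support it (a sketch reusing the class name from the code above):
@supports (grid-template-rows: subgrid) {
.sub-grid {
display: grid;
grid-row: span 3;
grid-template-rows: subgrid;
}
}
Browsers that don’t recognize the subgrid value simply skip the whole block, so the layout falls back to the default behavior.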
Intrinsic layouts
I’d be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space.
Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.
fr units is a way to say I want you to distribute the extra space in this way, but...don’t ever make it smaller than the content that’s inside of it.
—Jen Simmons, “Designing Intrinsic Layouts”
Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.
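As a minimal sketch of that idea (the .layout class is assumed, not taken from the talk), a grid can mix a fixed track, a content-sized track, and a flexible fr track:
.layout {
  display: grid;
  /* a fixed sidebar, a content-sized middle track, and a flexible main
     area that will never shrink below its own content */
  grid-template-columns: 200px min-content 1fr;
  gap: 1rem;
}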
Slide from “Designing Intrinsic Layouts” by Jen Simmons
What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation.
We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.
Another 2010 moment?
This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it’s another “everything changed” moment.
But it doesn’t seem to be moving quite as fast; I haven’t yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention.
One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase.
Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way.
You can’t framework your way out of a content problem
Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change.
Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.
Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content.
And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.
How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I’m a big fan of.
The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?
Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.
Content first
Content is not constant. After all, to design for the unknown or unexpected we need to account for content changes like our earlier Subgrid card example that allowed the cards to respond to adjustments to their own content and the content of sibling elements.
Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.
Instead of old markup hacks like this—
<p>
<span class="first-line">First line of text with different styling</span>...
</p>
—we can target content based on where it appears.
.element::first-line {
font-size: 1.4em;
}
.element::first-letter {
color: red;
}
Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right), something CSS Grid also does with functions like min(), max(), and clamp().
This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins but was often limited to switching from left-to-right to right-to-left orientation.
In the Sass version, directional variables need to be set.
$direction: rtl;
$opposite-direction: ltr;
$start-direction: right;
$end-direction: left;
These variables can be used as values—
body {
direction: $direction;
text-align: $start-direction;
}
—or as properties.
margin-#{$end-direction}: 10px;
padding-#{$start-direction}: 10px;
However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction.
margin-inline-end: 10px;
padding-inline-start: 10px;
There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.
Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs.
Fixed and fluid
We briefly covered the power of combining fixed widths with fluid widths with intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value with a flexible alternative.
For min() this means setting a fluid minimum value and a maximum fixed value.
.element {
width: min(50%, 300px);
}
The element in the example above will be 50% of its container as long as the element’s width doesn’t exceed 300px.
For max() we can set a flexible max value and a minimum fixed value.
.element {
width: max(50%, 300px);
}
Now the element will be 50% of its container as long as the element’s width is at least 300px. This means we can set limits but allow content to react to the available space.
The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.
.element {
width: clamp(300px, 50%, 600px);
}
This time, the element’s width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px.
With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.
Situation first
Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “...situations you haven’t imagined”?
It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.
This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.
Thankfully, there is a lot we can do to provide choice.
Responsible design
“There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”
—Chris Ashton, “I Used the Web for a Day on a 50 MB Budget”
One of the biggest assumptions we make is that people interacting with our designs have a good wifi connection and a wide screen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport using smaller mobile devices that can experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.
The srcset attribute allows the browser to decide which image to serve. This means we can create smaller ‘cropped’ images to display on mobile devices in turn using less bandwidth and less data.
<img
src="image-file.jpg"
srcset="large.jpg 1024w,
medium.jpg 640w,
small.jpg 320w"
alt="Image alt text" />
Preloading, via rel="preload" on a link element, can also help us to think about how and when media is downloaded. It can be used to tell a browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience.
<link rel="stylesheet" href="style.css"> <!--Standard stylesheet markup-->
<link rel="preload" href="style.css" as="style"> <!--Preload stylesheet markup-->
There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed.
<img src="image.png" loading="lazy" alt="…">
With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make.
So how can we put users in control?
The return of media queries
Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.
We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content.
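For instance, checks along these lines have been possible for years (the selectors here are illustrative, not from a specific project):
/* Only add hover affordances on devices that can actually hover */
@media (hover: hover) and (pointer: fine) {
  .nav-link:hover {
    text-decoration: underline;
  }
}
/* Hide navigation when the page is printed */
@media print {
  .site-nav {
    display: none;
  }
}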
As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.
For example, there’s a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.
@media (light-level: normal) {
  :root {
    --background-color: #fff;
    --text-color: #0b0c0c;
  }
}
@media (light-level: dim) {
  :root {
    --background-color: #efd226;
    --text-color: #0b0c0c;
  }
}
Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable.
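A small sketch of how the two well-supported preferences might be honoured (the custom properties follow the earlier theming example, and the .card class is an assumption):
/* Respect a system-level dark theme preference */
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0b0c0c;
    --text-color: #fff;
  }
}
/* Remove non-essential motion for people who have asked for less of it */
@media (prefers-reduced-motion: reduce) {
  .card {
    animation: none;
    transition: none;
  }
}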
Media queries like this go beyond choices made by a browser to grant more control to the user.
Expect the unexpected
In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market.
We can’t design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products.
A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real-time.
When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries.
Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.
Asynchronous Design Critique: Getting Feedback
“Any comment?” is probably one of the worst ways to ask for feedback. It’s vague and open ended, and it doesn’t provide any indication of what we’re looking for. Getting good feedback starts earlier than we might expect: it starts with the request.
It might seem counterintuitive to start the process of receiving feedback with a question, but that makes sense if we realize that getting feedback can be thought of as a form of design research. In the same way that we wouldn’t do any research without the right questions to get the insights that we need, the best way to ask for feedback is also to craft sharp questions.
Design critique is not a one-shot process. Sure, any good feedback workflow continues until the project is finished, but this is particularly true for design because design work continues iteration after iteration, from a high level to the finest details. Each level needs its own set of questions.
And finally, as with any good research, we need to review what we got back, get to the core of its insights, and take action. Question, iteration, and review. Let’s look at each of those.
The question
Being open to feedback is essential, but we need to be precise about what we’re looking for. Just saying “Any comment?”, “What do you think?”, or “I’d love to get your opinion” at the end of a presentation—whether it’s in person, over video, or through a written post—is likely to get a number of varied opinions or, even worse, get everyone to follow the direction of the first person who speaks up. And then... we get frustrated because vague questions like those can turn a high-level flows review into people instead commenting on the borders of buttons. That might be a hearty topic in its own right, so it might be hard at that point to redirect the team to the subject that you had wanted to focus on.
But how do we get into this situation? It’s a mix of factors. One is that we don’t usually consider asking as a part of the feedback process. Another is how natural it is to just leave the question implied, expecting the others to be on the same page. Another is that in nonprofessional discussions, there’s often no need to be that precise. In short, we tend to underestimate the importance of the questions, so we don’t work on improving them.
The act of asking good questions guides and focuses the critique. It’s also a form of consent: it makes it clear that you’re open to comments and what kind of comments you’d like to get. It puts people in the right mental state, especially in situations when they weren’t expecting to give feedback.
There isn’t a single best way to ask for feedback. It just needs to be specific, and specificity can take many shapes. A model for design critique that I’ve found particularly useful in my coaching is the one of stage versus depth.
“Stage” refers to each of the steps of the process—in our case, the design process. In progressing from user research to the final design, the kind of feedback evolves. But within a single step, one might still review whether some assumptions are correct and whether there’s been a proper translation of the amassed feedback into updated designs as the project has evolved. A starting point for potential questions could derive from the layers of user experience. What do you want to know: Project objectives? User needs? Functionality? Content? Interaction design? Information architecture? UI design? Navigation design? Visual design? Branding?
Here are a few example questions, precise and to the point, that refer to different layers:
- Functionality: Is automating account creation desirable?
- Interaction design: Take a look through the updated flow and let me know whether you see any steps or error states that I might’ve missed.
- Information architecture: We have two competing bits of information on this page. Is the structure effective in communicating them both?
- UI design: What are your thoughts on the error counter at the top of the page that makes sure that you see the next error, even if the error is out of the viewport?
- Navigation design: From research, we identified these second-level navigation items, but once you’re on the page, the list feels too long and hard to navigate. Are there any suggestions to address this?
- Visual design: Are the sticky notifications in the bottom-right corner visible enough?
The other axis of specificity is about how deep you’d like to go on what’s being presented. For example, we might have introduced a new end-to-end flow, but there was a specific view that you found particularly challenging and you’d like a detailed review of that. This can be especially useful from one iteration to the next where it’s important to highlight the parts that have changed.
There are other things that we can consider when we want to achieve more specific—and more effective—questions.
A simple trick is to remove generic qualifiers from your questions like “good,” “well,” “nice,” “bad,” “okay,” and “cool.” For example, asking, “When the block opens and the buttons appear, is this interaction good?” might look specific, but you can spot the “good” qualifier, and convert it to an even better question: “When the block opens and the buttons appear, is it clear what the next action is?”
Sometimes we actually do want broad feedback. That’s rare, but it can happen. In that sense, you might still make it explicit that you’re looking for a wide range of opinions, whether at a high level or with details. Or maybe just say, “At first glance, what do you think?” so that it’s clear that what you’re asking is open ended but focused on someone’s impression after their first five seconds of looking at it.
Sometimes the project is particularly expansive, and some areas may have already been explored in detail. In these situations, it might be useful to explicitly say that some parts are already locked in and aren’t open to feedback. It’s not something that I’d recommend in general, but I’ve found it useful to avoid falling again into rabbit holes of the sort that might lead to further refinement but aren’t what’s most important right now.
Asking specific questions can completely change the quality of the feedback that you receive. People with less refined critique skills will now be able to offer more actionable feedback, and even expert designers will welcome the clarity and efficiency that comes from focusing only on what’s needed. It can save a lot of time and frustration.
The iteration
Design iterations are probably the most visible part of the design work, and they provide a natural checkpoint for feedback. Yet a lot of design tools with inline commenting tend to show changes as a single fluid stream in the same file, and those types of design tools make conversations disappear once they’re resolved, update shared UI components automatically, and compel designs to always show the latest version—unless these would-be helpful features were to be manually turned off. The implied goal that these design tools seem to have is to arrive at just one final copy with all discussions closed, probably because they inherited patterns from how written documents are collaboratively edited. That’s probably not the best way to approach design critiques, but even if I don’t want to be too prescriptive here: that could work for some teams.
The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. I’m going to use the term iteration post for this. It refers to a write-up or presentation of the design iteration followed by a discussion thread of some kind. Any platform that can accommodate this structure can use this. By the way, when I refer to a “write-up or presentation,” I’m including video recordings or other media too: as long as it’s asynchronous, it works.
Using iteration posts has many advantages:
- It creates a rhythm in the design work so that the designer can review feedback from each iteration and prepare for the next.
- It makes decisions visible for future review, and conversations are likewise always available.
- It creates a record of how the design changed over time.
- Depending on the tool, it might also make it easier to collect feedback and act on it.
These posts of course don’t mean that no other feedback approach should be used, just that iteration posts could be the primary rhythm for a remote design team to use. And other feedback approaches (such as live critique, pair designing, or inline comments) can build from there.
I don’t think there’s a standard format for iteration posts. But there are a few high-level elements that make sense to include as a baseline:
- The goal
- The design
- The list of changes
- The questions
Each project is likely to have a goal, and hopefully it’s something that’s already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. So this is something that I’d repeat in every iteration post—literally copy and pasting it. The idea is to provide context and to repeat what’s essential to make each iteration post complete so that there’s no need to find information spread across multiple posts. If I want to know about the latest design, the latest iteration post will have all that I need.
This copy-and-paste part introduces another relevant concept: alignment comes from repetition. So having posts that repeat information is actually very effective toward making sure that everyone is on the same page.
The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that’s been done. In short, it’s any design artifact. For the final stages of work, I prefer the term blueprint to emphasize that I’ll be showing full flows instead of individual screens to make it easier to understand the bigger picture.
It can also be useful to label the artifacts with clear titles because that can make it easier to refer to them. Write the post in a way that helps people understand the work. It’s not too different from organizing a good live presentation.
For an efficient discussion, you should also include a bullet list of the changes from the previous iteration to let people focus on what’s new, which can be especially useful for larger pieces of work where keeping track, iteration after iteration, could become a challenge.
And finally, as noted earlier, it’s essential that you include a list of the questions to drive the design critique in the direction you want. Doing this as a numbered list can also help make it easier to refer to each question by its number.
Not all iterations are the same. Earlier iterations don’t need to be as tightly focused—they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Then later, the iterations start settling on a solution and refining it until the design process reaches its end and the feature ships.
I want to highlight that even if these iteration posts are written and conceived as checkpoints, by no means do they need to be exhaustive. A post might be a draft—just a concept to get a conversation going—or it could be a cumulative list of each feature that was added over the course of each iteration until the full picture is done.
Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. This might look like a minor labelling tip, but it can help in multiple ways:
- Unique—It’s a clear unique marker. Within each project, one can easily say, “This was discussed in i4,” and everyone knows where they can go to review things.
- Unassuming—It works like versions (such as v1, v2, and v3) but in contrast, versions create the impression of something that’s big, exhaustive, and complete. Iterations must be able to be exploratory, incomplete, partial.
- Future proof—It resolves the “final” naming problem that you can run into with versions. No more files named “final final complete no-really-its-done.” Within each project, the largest number always represents the latest iteration.
To mark when a design is complete enough to be worked on, even if some bits still need attention and further iterations, the wording release candidate (RC) could be used to describe it: “with i8, we reached RC” or “i12 is an RC.”
The review
What usually happens during a design critique is an open discussion, with a back and forth between people that can be very productive. This approach is particularly effective during live, synchronous feedback. But when we work asynchronously, it’s more effective to use a different approach: we can shift to a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and we can analyze it accordingly.
This shift has some major benefits that make asynchronous feedback particularly effective, especially around these friction points:
- It removes the pressure to reply to everyone.
- It reduces the frustration from swoop-by comments.
- It lessens our personal stake.
The first friction point is feeling a pressure to reply to every single comment. Sometimes we write the iteration post, and we get replies from our team. It’s just a few of them, it’s easy, and it doesn’t feel like a problem. But other times, some solutions might require more in-depth discussions, and the amount of replies can quickly increase, which can create a tension between trying to be a good team player by replying to everyone and doing the next design iteration. This might be especially true if the person who’s replying is a stakeholder or someone directly involved in the project who we feel that we need to listen to. We need to accept that this pressure is absolutely normal, and it’s human nature to try to accommodate people who we care about. Sometimes replying to all comments can be effective, but if we treat a design critique more like user research, we realize that we don’t have to reply to every comment, and in asynchronous spaces, there are alternatives:
- One is to let the next iteration speak for itself. When the design evolves and we post a follow-up iteration, that’s the reply. You might tag all the people who were involved in the previous discussion, but even that’s a choice, not a requirement.
- Another is to briefly reply to acknowledge each comment, such as “Understood. Thank you,” “Good points—I’ll review,” or “Thanks. I’ll include these in the next iteration.” In some cases, this could also be just a single top-level comment along the lines of “Thanks for all the feedback everyone—the next iteration is coming soon!”
- Another is to provide a quick summary of the comments before moving on. Depending on your workflow, this can be particularly useful as it can provide a simplified checklist that you can then use for the next iteration.
The second friction point is the swoop-by comment, which is the kind of feedback that comes from someone outside the project or team who might not be aware of the context, restrictions, decisions, or requirements—or of the previous iterations’ discussions. On their side, one can hope that they’ll learn to acknowledge when they’re doing this and become more conscious about outlining where they’re coming from. Swoop-by comments often trigger the simple thought “We’ve already discussed this…”, and it can be frustrating to have to repeat the same reply over and over.
Let’s begin by acknowledging again that there’s no need to reply to every comment. If, however, replying to a previously litigated point might be useful, a short reply with a link to the previous discussion for extra details is usually enough. Remember, alignment comes from repetition, so it’s okay to repeat things sometimes!
Swoop-by commenting can still be useful for two reasons: they might point out something that still isn’t clear, and they also have the potential to stand in for the point of view of a user who’s seeing the design for the first time. Sure, you’ll still be frustrated, but that might at least help in dealing with it.
The third friction point is the personal stake we could have with the design, which could make us feel defensive if the review were to feel more like a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego (because yes, even if we don’t want to admit it, it’s there). And ultimately, treating everything in aggregated form allows us to better prioritize our work.
Always remember that while you need to listen to stakeholders, project owners, and specific advice, you don’t have to accept every piece of feedback. You have to analyze it and make a decision that you can justify, but sometimes “no” is the right answer.
As the designer leading the project, you’re in charge of that decision. Ultimately, everyone has their specialty, and as the designer, you’re the one who has the most knowledge and the most context to make the right decision. And by listening to the feedback that you’ve received, you’re making sure that it’s also the best and most balanced decision.
Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.
Asynchronous Design Critique: Giving Feedback
Feedback, in whichever form it takes, and whatever it may be called, is one of the most effective soft skills that we have at our disposal to collaboratively get our designs to a better place while growing our own skills and perspectives.
Feedback is also one of the most underestimated tools, and often by assuming that we’re already good at it, we settle, forgetting that it’s a skill that can be trained, grown, and improved. Poor feedback can create confusion in projects, bring down morale, and affect trust and team collaboration over the long term. Quality feedback can be a transformative force.
Practicing our skills is surely a good way to improve, but the learning gets even faster when it’s paired with a good foundation that channels and focuses the practice. What are some foundational aspects of giving good feedback? And how can feedback be adjusted for remote and distributed work environments?
On the web, we can identify a long tradition of asynchronous feedback: from the early days of open source, code was shared and discussed on mailing lists. Today, developers engage on pull requests, designers comment in their favorite design tools, project managers and scrum masters exchange ideas on tickets, and so on.
Design critique is often the name used for a type of feedback that’s provided to make our work better, collaboratively. So it shares a lot of the principles with feedback in general, but it also has some differences.
The content
The foundation of every good critique is the feedback’s content, so that’s where we need to start. There are many models that you can use to shape your content. The one that I personally like best—because it’s clear and actionable—is this one from Lara Hogan, which combines an observation, its impact, and a question (or request).
While this equation is generally used to give feedback to people, it also fits really well in a design critique because it ultimately answers some of the core questions that we work on: What? Where? Why? How? Imagine that you’re giving some feedback about some design work that spans multiple screens, like an onboarding flow: there are some pages shown, a flow blueprint, and an outline of the decisions made. You spot something that could be improved. If you keep the three elements of the equation in mind, you’ll have a mental model that can help you be more precise and effective.
Here is a comment that could be given as a part of some feedback, and it might look reasonable at a first glance: it seems to superficially fulfill the elements in the equation. But does it?
Not sure about the buttons’ styles and hierarchy—it feels off. Can you change them?
Observation for design feedback doesn’t just mean pointing out which part of the interface your feedback refers to, but it also refers to offering a perspective that’s as specific as possible. Are you providing the user’s perspective? Your expert perspective? A business perspective? The project manager’s perspective? A first-time user’s perspective?
When I see these two buttons, I expect one to go forward and one to go back.
Impact is about the why. Just pointing out a UI element might sometimes be enough if the issue is obvious, but more often than not, you should add an explanation of what you’re pointing out.
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow.
The question approach is meant to provide open guidance by eliciting the critical thinking in the designer receiving the feedback. Notably, in Lara’s equation she provides a second approach: request, which instead provides guidance toward a specific solution. While that’s a viable option for feedback in general, for design critiques, in my experience, defaulting to the question approach usually reaches the best solutions because designers are generally more comfortable in being given an open space to explore.
The difference between the two can be exemplified with, for the question approach:
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Would it make sense to unify them?
Or, for the request approach:
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same pair of forward and back buttons.
At this point in some situations, it might be useful to integrate with an extra why: why you consider the given suggestion to be better.
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
Choosing the question approach or the request approach can also at times be a matter of personal preference. A while ago, I was putting a lot of effort into improving my feedback: I did rounds of anonymous feedback, and I reviewed feedback with other people. After a few rounds of this work and a year later, I got a positive response: my feedback came across as effective and grounded. Until I changed teams. To my shock, my next round of feedback from one specific person wasn’t that great. The reason is that I had previously tried not to be prescriptive in my advice—because the people who I was previously working with preferred the open-ended question format over the request style of suggestions. But now in this other team, there was one person who instead preferred specific guidance. So I adapted my feedback for them to include requests.
One comment that I heard come up a few times is that this kind of feedback is quite long, and it doesn’t seem very efficient. No… but also yes. Let’s explore both sides.
No, this style of feedback is actually efficient because the length here is a byproduct of clarity, and spending time giving this kind of feedback can provide exactly enough information for a good fix. Also if we zoom out, it can reduce future back-and-forth conversations and misunderstandings, improving the overall efficiency and effectiveness of collaboration beyond the single comment. Imagine that in the example above the feedback were instead just, “Let’s make sure that all screens have the same two forward and back buttons.” The designer receiving this feedback wouldn’t have much to go by, so they might just apply the change. In later iterations, the interface might change or they might introduce new features—and maybe that change might not make sense anymore. Without the why, the designer might imagine that the change is about consistency… but what if it wasn’t? So there could now be an underlying concern that changing the buttons would be perceived as a regression.
Yes, this style of feedback is not always efficient because the points in some comments don’t always need to be exhaustive, sometimes because certain changes may be obvious (“The font used doesn’t follow our guidelines”) and sometimes because the team may have a lot of internal knowledge such that some of the whys may be implied.
So the equation above isn’t meant to suggest a strict template for feedback but a mnemonic to reflect and improve the practice. Even after years of active work on my critiques, I still from time to time go back to this formula and reflect on whether what I just wrote is effective.
The tone
Well-grounded content is the foundation of feedback, but that’s not really enough. The soft skills of the person who’s providing the critique can multiply the likelihood that the feedback will be well received and understood. Tone alone can make the difference between content that’s rejected or welcomed, and it’s been demonstrated that only positive feedback creates sustained change in people.
Since our goal is to be understood and to have a positive working environment, tone is essential to work on. Over the years, I’ve tried to summarize the required soft skills in a formula that mirrors the one for content: the receptivity equation, which combines timing, attitude, and form.
Respectful feedback comes across as grounded, solid, and constructive. It’s the kind of feedback that, whether it’s positive or negative, is perceived as useful and fair.
Timing refers to when the feedback happens. To-the-point feedback doesn’t have much hope of being well received if it’s given at the wrong time. Questioning the entire high-level information architecture of a new feature when it’s about to ship might still be relevant if that questioning highlights a major blocker that nobody saw, but it’s way more likely that those concerns will have to wait for a later rework. So in general, attune your feedback to the stage of the project. Early iteration? Late iteration? Polishing work in progress? These all have different needs. The right timing will make it more likely that your feedback will be well received.
Attitude is the equivalent of intent, and in the context of person-to-person feedback, it can be referred to as radical candor. That means checking before we write to see whether what we have in mind will truly help the person and make the project better overall. This might be a hard reflection at times because maybe we don’t want to admit that we don’t really appreciate that person. Hopefully that’s not the case, but that can happen, and that’s okay. Acknowledging and owning that can help you make up for that: how would I write if I really cared about them? How can I avoid being passive aggressive? How can I be more constructive?
Form is especially relevant in diverse and cross-cultural work environments because having great content, perfect timing, and the right attitude might not come across if the way that we write creates misunderstandings. There might be many reasons for this: sometimes certain words might trigger specific reactions; sometimes nonnative speakers might not understand all the nuances of some sentences; sometimes our brains might just be different and we might perceive the world differently—neurodiversity must be taken into consideration. Whatever the reason, it’s important to review not just what we write but how.
A few years back, I was asking for some feedback on how I give feedback. I received some good advice but also a comment that surprised me. They pointed out that when I wrote “Oh, […],” I made them feel stupid. That wasn’t my intent! I felt really bad, and I just realized that I provided feedback to them for months, and every time I might have made them feel stupid. I was horrified… but also thankful. I made a quick fix: I added “oh” in my list of replaced words (your choice between: macOS’s text replacement, aText, TextExpander, or others) so that when I typed “oh,” it was instantly deleted.
Something to highlight because it’s quite frequent—especially in teams that have a strong group spirit—is that people tend to beat around the bush. It’s important to remember here that a positive attitude doesn’t mean going light on the feedback—it just means that even when you provide hard, difficult, or challenging feedback, you do so in a way that’s respectful and constructive. The nicest thing that you can do for someone is to help them grow.
We have a great advantage in giving feedback in written form: it can be reviewed by another person who isn’t directly involved, which can help to reduce or remove any bias that might be there. I found that the best, most insightful moments for me have happened when I’ve shared a comment and I’ve asked someone who I highly trusted, “How does this sound?,” “How can I do it better,” and even “How would you have written it?”—and I’ve learned a lot by seeing the two versions side by side.
The format
Asynchronous feedback also has a major inherent advantage: we can take more time to refine what we’ve written to make sure that it fulfills two main goals: the clarity of communication and the actionability of the suggestions.
Let’s imagine that someone shared a design iteration for a project. You are reviewing it and leaving a comment. There are many ways to do this, and of course context matters, but let’s try to think about some elements that may be useful to consider.
In terms of clarity, start by grounding the critique that you’re about to give by providing context. Specifically, this means describing where you’re coming from: do you have a deep knowledge of the project, or is this the first time that you’re seeing it? Are you coming from a high-level perspective, or are you figuring out the details? Are there regressions? Which user’s perspective are you taking when providing your feedback? Is the design iteration at a point where it would be okay to ship this, or are there major things that need to be addressed first?
Providing context is helpful even if you’re sharing feedback within a team that already has some information on the project. And context is absolutely essential when giving cross-team feedback. If I were to review a design that might be indirectly related to my work, and if I had no knowledge about how the project arrived at that point, I would say so, highlighting my take as external.
We often focus on the negatives, trying to outline all the things that could be done better. That’s of course important, but it’s just as important—if not more—to focus on the positives, especially if you saw progress from the previous iteration. This might seem superfluous, but it’s important to keep in mind that design is a discipline where there are hundreds of possible solutions for every problem. So pointing out that the design solution that was chosen is good and explaining why it’s good has two major benefits: it confirms that the approach taken was solid, and it helps to ground your negative feedback. In the longer term, sharing positive feedback can help prevent regressions on things that are going well because those things will have been highlighted as important. As a bonus, positive feedback can also help reduce impostor syndrome.
There’s one powerful approach that combines both context and a focus on the positives: frame how the design is better than the status quo (compared to a previous iteration, competitors, or benchmarks) and why, and then on that foundation, you can add what could be improved. This is powerful because there’s a big difference between a critique that’s for a design that’s already in good shape and a critique that’s for a design that isn’t quite there yet.
Another way that you can improve your feedback is to depersonalize the feedback: the comments should always be about the work, never about the person who made it. It’s “This button isn’t well aligned” versus “You haven’t aligned this button well.” This is very easy to change in your writing by reviewing it just before sending.
In terms of actionability, one of the best approaches to help the designer who’s reading through your feedback is to split it into bullet points or paragraphs, which are easier to review and analyze one by one. For longer pieces of feedback, you might also consider splitting it into sections or even across multiple comments. Of course, adding screenshots or signifying markers of the specific part of the interface you’re referring to can also be especially useful.
One approach that I’ve personally used effectively in some contexts is to enhance the bullet points with four markers using emojis. So a red square 🟥 means that it’s something that I consider blocking; a yellow diamond 🔶 is something that I can be convinced otherwise, but it seems to me that it should be changed; and a green circle 🟢 is a detailed, positive confirmation. I also use a blue spiral 🌀 for either something that I’m not sure about, an exploration, an open alternative, or just a note. But I’d use this approach only on teams where I’ve already established a good level of trust because if it happens that I have to deliver a lot of red squares, the impact could be quite demoralizing, and I’d reframe how I’d communicate that a bit.
Let’s see how this would work by reusing the example that we used earlier as the first bullet point in this list:
- 🔶 Navigation—When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
- 🟢 Overall—I think the page is solid, and this is good enough to be our release candidate for a version 1.0.
- 🟢 Metrics—Good improvement in the buttons on the metrics area; the improved contrast and new focus style make them more accessible.
- 🟥 Button Style—Using the green accent in this context creates the impression that it’s a positive action because green is usually perceived as a confirmation color. Do we need to explore a different color?
- 🔶 Tiles—Given the number of items on the page, and the overall page hierarchy, it seems to me that the tiles shouldn’t be using the Subtitle 1 style but the Subtitle 2 style. This will keep the visual hierarchy more consistent.
- 🌀 Background—Using a light texture works well, but I wonder whether it adds too much noise in this kind of page. What is the thinking in using that?
What about giving feedback directly in Figma or another design tool that allows in-place feedback? In general, I find these difficult to use because they hide discussions and they’re harder to track, but in the right context, they can be very effective. Just make sure that each of the comments is separate so that it’s easier to match each discussion to a single task, similar to the idea of splitting mentioned above.
One final note: say the obvious. Sometimes we might feel that something is obviously good or obviously wrong, and so we don’t say it. Or sometimes we might have a doubt that we don’t express because the question might sound stupid. Say it—that’s okay. You might have to reword it a little bit to make the reader feel more comfortable, but don’t hold it back. Good feedback is transparent, even when it may be obvious.
There’s another advantage of asynchronous feedback: written feedback automatically tracks decisions. Especially in large projects, “Why did we do this?” could be a question that pops up from time to time, and there’s nothing better than open, transparent discussions that can be reviewed at any time. For this reason, I recommend using software that saves these discussions, without hiding them once they are resolved.
Content, tone, and format. Each one of these subjects provides a useful model, but working to improve eight areas—observation, impact, question, timing, attitude, form, clarity, and actionability—is a lot of work to put in all at once. One effective approach is to take them one by one: first identify the area that you lack the most (either from your perspective or from feedback from others) and start there. Then the second, then the third, and so on. At first you’ll have to put in extra time for every piece of feedback that you give, but after a while, it’ll become second nature, and your impact on the work will multiply.
Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.
That’s Not My Burnout
Are you like me, reading about people fading away as they burn out, and feeling unable to relate? Do you feel like your feelings are invisible to the world because you’re experiencing burnout differently? When burnout starts to push down on us, our core comes through more. Beautiful, peaceful souls get quieter and fade into that distant and distracted burnout we’ve all read about. But some of us, those with fires always burning on the edges of our core, get hotter. In my heart I am fire. When I face burnout I double down, triple down, burning hotter and hotter to try to best the challenge. I don’t fade—I am engulfed in a zealous burnout.
So what on earth is a zealous burnout?
Imagine a woman determined to do it all. She has two amazing children whom she, along with her husband who is also working remotely, is homeschooling during a pandemic. She has a demanding client load at work—all of whom she loves. She gets up early to get some movement in (or often catch up on work), does dinner prep as the kids are eating breakfast, and gets to work while positioning herself near “fourth grade” to listen in as she juggles clients, tasks, and budgets. Sound like a lot? Even with a supportive team both at home and at work, it is.
Sounds like this woman has too much on her plate and needs self-care. But no, she doesn’t have time for that. In fact, she starts to feel like she’s dropping balls. Not accomplishing enough. There’s not enough of her to be here and there; she is trying to divide her mind in two all the time, all day, every day. She starts to doubt herself. And as those feelings creep in more and more, her internal narrative becomes more and more critical.
Suddenly she KNOWS what she needs to do! She should DO MORE.
This is a hard and dangerous cycle. Know why? Because once she doesn’t finish that new goal, that narrative will get worse. Suddenly she’s failing. She isn’t doing enough. SHE is not enough. She might fail, she might fail her family...so she’ll find more she should do. She doesn’t sleep as much, move as much, all in the efforts to do more. Caught in this cycle of trying to prove herself to herself, never reaching any goal. Never feeling “enough.”
So, yeah, that’s what zealous burnout looks like for me. It doesn’t happen overnight in some grand gesture but instead slowly builds over weeks and months. My burning-out process looks like a person speeding up, not a person losing focus. I speed up and up and up...and then I just stop.
I am the one who could
It’s funny the things that shape us. Through the lens of childhood, I viewed the fears, struggles, and sacrifices of someone who had to make it all work without having enough. I was lucky that my mother was so resourceful and my father supportive; I never went without and even got an extra here or there.
Growing up, I did not feel shame when my mother paid with food stamps; in fact, I’d have likely taken on any debate on the topic, verbally eviscerating anyone who dared to criticize the disabled woman trying to make sure all our needs were met with so little. As a child, I watched the way the fear of not making those ends meet impacted people I love. As the non-disabled person in my home, I would take on many of the physical tasks because I was “the one who could” make our lives a little easier. I learned early to associate fears or uncertainty with putting more of myself into it—I am the one who can. I learned early that when something frightens me, I can double down and work harder to make it better. I can own the challenge. When people have seen this in me as an adult, I’ve been told I seem fearless, but make no mistake, I’m not. If I seem fearless, it’s because this behavior was forged from other people’s fears.
And here I am, more than 30 years later still feeling the urge to mindlessly push myself forward when faced with overwhelming tasks ahead of me, assuming that I am the one who can and therefore should. I find myself driven to prove that I can make things happen if I work longer hours, take on more responsibility, and do more.
I do not see people who struggle financially as failures, because I have seen how strong that tide can be—it pulls you along the way. I truly get that I have been privileged to be able to avoid many of the challenges that were present in my youth. That said, I am still “the one who can” who feels she should, so if I were faced with not having enough to make ends meet for my own family, I would see myself as having failed. Though I am supported and educated, most of this is due to good fortune. I will, however, allow myself the arrogance of saying I have been careful with my choices to have encouraged that luck. My identity stems from the idea that I am “the one who can” so therefore feel obligated to do the most. I can choose to stop, and with some quite literal cold water splashed in my face, I’ve made the choice to before. But that choosing to stop is not my go-to; I move forward, driven by a fear that is so a part of me that I barely notice it’s there until I’m feeling utterly worn away.
So why all the history? You see, burnout is a fickle thing. I have heard and read a lot about burnout over the years. Burnout is real. Especially now, with COVID, many of us are balancing more than we ever have before—all at once! It’s hard, and the procrastinating, the avoidance, the shutting down impacts so many amazing professionals. There are important articles that relate to what I imagine must be the majority of people out there, but not me. That’s not what my burnout looks like.
The dangerous invisibility of zealous burnout
A lot of work environments see the extra hours, extra effort, and overall focused commitment as an asset (and sometimes that’s all it is). They see someone trying to rise to challenges, not someone stuck in their fear. Many well-meaning organizations have safeguards in place to protect their teams from burnout. But in cases like this, those alarms are not always tripped, and then when the inevitable stop comes, some members of the organization feel surprised and disappointed. And sometimes maybe even betrayed.
Parents—more so mothers, statistically speaking—are praised as being so on top of it all when they can work, be involved in the after-school activities, practice self-care in the form of diet and exercise, and still meet friends for coffee or wine. During COVID many of us have binged countless streaming episodes showing how it’s so hard for the female protagonist, but she is strong and funny and can do it. It’s a “very special episode” when she breaks down, cries in the bathroom, woefully admits she needs help, and just stops for a bit. Truth is, countless people are hiding their tears or are doom-scrolling to escape. We know that the media is a lie to amuse us, but often the perception that it’s what we should strive for has penetrated much of society.
Women and burnout
I love men. And though I don’t love every man (heads up, I don’t love every woman or nonbinary person either), I think there is a beautiful spectrum of individuals who represent that particular binary gender.
That said, women are still more often at risk of burnout than their male counterparts, especially in these COVID stressed times. Mothers in the workplace feel the pressure to do all the “mom” things while giving 110%. Mothers not in the workplace feel they need to do more to “justify” their lack of traditional employment. Women who are not mothers often feel the need to do even more because they don’t have that extra pressure at home. It’s vicious and systemic and so a part of our culture that we’re often not even aware of the enormity of the pressures we put on ourselves and each other.
And there are prices beyond happiness too. Harvard Health Publishing released a study a decade ago that “uncovered strong links between women’s job stress and cardiovascular disease.” The CDC noted, “Heart disease is the leading cause of death for women in the United States, killing 299,578 women in 2017—or about 1 in every 5 female deaths.”
This relationship between work stress and health, from what I have read, is more dangerous for women than it is for their non-female counterparts.
But what if your burnout isn’t like that either?
That might not be you either. After all, each of us is so different and how we respond to stressors is too. It’s part of what makes us human. Don’t stress what burnout looks like, just learn to recognize it in yourself. Here are a few questions I sometimes ask friends if I am concerned about them.
Are you happy? This simple question should be the first thing you ask yourself. Chances are, even if you’re burning out doing all the things you love, as you approach burnout you’ll just stop taking as much joy from it all.
Do you feel empowered to say no? I have observed in myself and others that when someone is burning out, they no longer feel they can say no to things. Even those who don’t “speed up” feel pressure to say yes to not disappoint the people around them.
What are three things you’ve done for yourself? Another observation is that we all tend to stop doing things for ourselves. Anything from skipping showers and eating poorly to avoiding talking to friends. These can be red flags.
Are you making excuses? Many of us try to disregard feelings of burnout. Over and over I have heard, “It’s just crunch time,” “As soon as I do this one thing, it will all be better,” and “Well I should be able to handle this, so I’ll figure it out.” And it might really be crunch time, a single goal, and/or a skill set you need to learn. That happens—life happens. BUT if this doesn’t stop, be honest with yourself. If you’ve worked more 50-hour weeks since January than not, maybe it’s not crunch time—maybe it’s a bad situation that you’re burning out from.
Do you have a plan to stop feeling this way? If something is truly temporary and you do need to just push through, then it has an exit route with a defined end.
Take the time to listen to yourself as you would a friend. Be honest, allow yourself to be uncomfortable, and break the thought cycles that prevent you from healing.
So now what?
What I just described is a different path to burnout, but it’s still burnout. There are well-established approaches to working through burnout:
- Get enough sleep.
- Eat healthy.
- Work out.
- Get outside.
- Take a break.
- Overall, practice self-care.
Those are hard for me because they feel like more tasks. If I’m in the burnout cycle, doing any of the above for me feels like a waste. The narrative is that if I’m already failing, why would I take care of myself when I’m dropping all those other balls? People need me, right?
If you’re deep in the cycle, your inner voice might be pretty awful by now. If you need to, tell yourself you need to take care of the person your people depend on. If your roles are pushing you toward burnout, use them to help make healing easier by justifying the time spent working on you.
To help remind myself of the airline attendant message about putting the mask on yourself first, I have come up with a few things that I do when I start feeling myself going into a zealous burnout.
Cook an elaborate meal for someone!
OK, I am a “food-focused” individual so cooking for someone is always my go-to. There are countless tales in my home of someone walking into the kitchen and turning right around and walking out when they noticed I was “chopping angrily.” But it’s more than that, and you should give it a try. Seriously. It’s the perfect go-to if you don’t feel worthy of taking time for yourself—do it for someone else. Most of us work in a digital world, so cooking can fill all of your senses and force you to be in the moment with all the ways you perceive the world. It can break you out of your head and help you gain a better perspective. In my house, I’ve been known to pick a place on the map and cook food that comes from wherever that is (thank you, Pinterest). I love cooking Indian food, as the smells are warm, the bread needs just enough kneading to keep my hands busy, and the process takes real attention for me because it’s not what I was brought up making. And in the end, we all win!
Vent like a foul-mouthed fool
Be careful with this one!
I have been making an effort to practice more gratitude over the past few years, and I recognize the true benefits of that. That said, sometimes you just gotta let it all out—even the ugly. Hell, I’m a big fan of not sugarcoating our lives, and that sometimes means that to get past the big pile of poop, you’re gonna wanna complain about it a bit.
When that is what’s needed, turn to a trusted friend and allow yourself some pure verbal diarrhea, saying all the things that are bothering you. You need to trust this friend not to judge, to see your pain, and, most importantly, to tell you to remove your cranium from your own rectal cavity. Seriously, it’s about getting a reality check here! One of the things I admire the most about my husband (though often after the fact) is his ability to break things down to their simplest. “We’re spending our lives together, of course you’re going to disappoint me from time to time, so get over it” has been his way of speaking his dedication, love, and acceptance of me—and I could not be more grateful. It also, of course, has meant that I needed to remove my head from that rectal cavity. So, again, usually those moments are appreciated in hindsight.
Pick up a book!
There are many books out there that aren’t so much self-help as they are people just like you sharing their stories and how they’ve come to find greater balance. Maybe you’ll find something that speaks to you. Titles that have stood out to me include:
- Thrive by Arianna Huffington
- Tools of Titans by Tim Ferriss
- Girl, Stop Apologizing by Rachel Hollis
- Dare to Lead by Brené Brown
Or, another tactic I love to employ is to read or listen to a book that has NOTHING to do with my work-life balance. I’ve read the following books and found they helped balance me out because my mind was pondering their interesting topics instead of running in circles:
- The Drunken Botanist by Amy Stewart
- Superlife by Darin Olien
- A Brief History of Everyone Who Ever Lived by Adam Rutherford
- Gaia’s Garden by Toby Hemenway
If you’re not into reading, pick up a topic on YouTube or choose a podcast to subscribe to. I’ve watched countless permaculture and gardening topics in addition to how to raise chickens and ducks. For the record, I do not have a particularly large food garden, nor do I own livestock of any kind...yet. I just find the topic interesting, and it has nothing to do with any aspect of my life that needs anything from me.
Forgive yourself
You are never going to be perfect—hell, it would be boring if you were. It’s OK to be broken and flawed. It’s human to be tired and sad and worried. It’s OK to not do it all. It’s scary to be imperfect, but you cannot be brave if nothing is scary.
This last one is the most important: allow yourself permission to NOT do it all. You never promised to be everything to everyone at all times. We are more powerful than the fears that drive us.
This is hard. It is hard for me. It’s what’s driven me to write this—that it’s OK to stop. It’s OK that your unhealthy habit that might even benefit those around you needs to end. You can still be successful in life.
I recently read that we are all writing our eulogy in how we live. Knowing that your professional accomplishments won’t be mentioned in that speech, what will yours say? What do you want it to say?
Look, I get that none of these ideas will “fix it,” and that’s not their purpose. None of us are in control of our surroundings, only how we respond to them. These suggestions are to help stop the spiral effect so that you are empowered to address the underlying issues and choose your response. They are things that work for me most of the time. Maybe they’ll work for you.
Does this sound familiar?
If this sounds familiar, it’s not just you. Don’t let your negative self-talk tell you that you “even burn out wrong.” It’s not wrong. Even if rooted in fear like my own drivers, I believe that this need to do more comes from a place of love, determination, motivation, and other wonderful attributes that make you the amazing person you are. We’re going to be OK, ya know. The lives that unfold before us might never look like that story in our head—that idea of “perfect” or “done” we’re looking for, but that’s OK. Really, when we stop and look around, usually the only eyes that judge us are in the mirror.
Do you remember that Winnie the Pooh sketch that had Pooh eat so much at Rabbit’s house that his buttocks couldn’t fit through the door? Well, I already associate a lot with Rabbit, so it came as no surprise when he abruptly declared that this was unacceptable. But do you recall what happened next? He put a shelf across poor Pooh’s ankles and decorations on his back, and made the best of the big butt in his kitchen.
At the end of the day we are resourceful and know that we are able to push ourselves if we need to—even when we are tired to our core or have a big butt of fluff ‘n’ stuff in our room. None of us has to be afraid, as we can manage any obstacle put in front of us. And maybe that means we will need to redefine success to allow space for being uncomfortably human, but that doesn’t really sound so bad either.
So, wherever you are right now, please breathe. Do what you need to do to get out of your head. Forgive and take care.

Performant, sleek and elegant.
Swift 6 suitable notification observers in iOS
- iOS
- Swift
The author discusses challenges managing side projects, specifically updating SignalPath to Swift 6. They encountered errors related to multiple notification observations but resolved them by shifting to publishers, avoiding sendable closure issues. Although the new approach risks background thread notifications, the compiler is satisfied with the adjustments made to the code.
I have a couple of side projects going on, although it is always a challenge to find time for them. One of them, SignalPath, is something I created back in 2015. Recently, I have been spending some time bumping the Swift version to 6, which brought quite a list of errors. In many places I had code that dealt with observing multiple notifications, and of course Swift 6 was not happy about it.

let handler: (Notification) -> Void = { [weak self] notification in
    self?.keyboardInfo = Info(notification: notification)
}
let names: [Notification.Name] = [
    UIResponder.keyboardWillShowNotification,
    UIResponder.keyboardWillHideNotification,
    UIResponder.keyboardWillChangeFrameNotification
]
observers = names.map({ name -> NSObjectProtocol in
    return NotificationCenter.default.addObserver(forName: name, object: nil, queue: .main, using: handler)
    // Converting non-sendable function value to '@Sendable (Notification) -> Void' may introduce data races
})

After moving all of the notification observing to publishers instead, I can ignore the whole sendable closure problem altogether.

Publishers.Merge3(
    NotificationCenter.default.publisher(for: UIResponder.keyboardWillShowNotification),
    NotificationCenter.default.publisher(for: UIResponder.keyboardWillHideNotification),
    NotificationCenter.default.publisher(for: UIResponder.keyboardWillChangeFrameNotification)
)
.map(Info.init)
.assignWeakly(to: \.keyboardInfo, on: self)
.store(in: &notificationCancellables)

Great, the compiler is happy again, although this code could cause trouble if any of the notifications were posted from a background thread. Since that is not the case here, I went for skipping .receive(on: DispatchQueue.main). assignWeakly is a custom operator, and the implementation looks like this:

public extension Publisher where Self.Failure == Never {
    func assignWeakly<Root>(to keyPath: ReferenceWritableKeyPath<Root, Self.Output>, on object: Root) -> AnyCancellable where Root: AnyObject {
        return sink { [weak object] value in
            object?[keyPath: keyPath] = value
        }
    }
}
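If any of those notifications could arrive on a background thread, a minimal adjustment (a sketch based on the same pipeline as above, not something the original post needed) would be to hop back to the main queue before assigning:

Publishers.Merge3(
    NotificationCenter.default.publisher(for: UIResponder.keyboardWillShowNotification),
    NotificationCenter.default.publisher(for: UIResponder.keyboardWillHideNotification),
    NotificationCenter.default.publisher(for: UIResponder.keyboardWillChangeFrameNotification)
)
.map(Info.init)
.receive(on: DispatchQueue.main) // deliver downstream values on the main thread before touching UI state
.assignWeakly(to: \.keyboardInfo, on: self)
.store(in: &notificationCancellables)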
AnyClass protocol and Objective-C methods
- iOS
- Swift
- AnyClass
AnyClass is a protocol all classes conform to and it comes with a feature I was not aware of. But first, how I ended up using AnyClass. While working on code using CoreData, I needed a way to enumerate all the CoreData entities and call a static function on them. If that function […]
AnyClass is a protocol all classes conform to and it comes with a feature I was not aware of. But first, how I ended up using AnyClass. While working on code using CoreData, I needed a way to enumerate all the CoreData entities and call a static function on them. If that function is defined, it runs an entity-specific update. Let’s call the function static func resetState(). It is easy to get the list of entity names from the model and then turn them into AnyClass instances using the NSClassFromString() function.

let entityClasses = managedObjectModel.entities
    .compactMap(\.name)
    .compactMap { NSClassFromString($0) }

At this point I had an array of AnyClass instances where some of them implemented the resetState function and some didn’t. While browsing the AnyClass documentation, I saw this: “You can use the AnyClass protocol as the concrete type for an instance of any class. When you do, all known @objc class methods and properties are available as implicitly unwrapped optional methods and properties, respectively.” I had never heard about it, probably because I have never really needed to interact with AnyClass in such a way. Therefore, if I create an @objc static function, I can call it by unwrapping it with ?. Without unwrapping it safely, it would crash because the Department type does not implement the function.

class Department: NSManagedObject {
}

class Employee: NSManagedObject {
    @objc static func resetState() {
        print("Resetting Employee")
    }
}

// This triggers Employee.resetState and prints the message to the console
for entityClass in entityClasses {
    entityClass.resetState?()
}

It has been a while since I wrote any Objective-C code, but its features leaking into Swift helped me out here. Reminds me of days filled with respondsToSelector and performSelector.
AnyView is everywhere in Xcode 16
- iOS
- Xcode
- Swift
Xcode 16 introduces a new execution engine for Previews, enhancing project configuration support and improving performance by up to 30%. However, it wraps SwiftUI views in AnyView for debug builds, which can hinder optimization. Users can override this behavior with a custom build setting to maintain performance in debugging.
Loved to see this entry in Xcode 16’s release notes: “Xcode 16 brings a new execution engine for Previews that supports a larger range of projects and configurations. Now with shared build products between Build and Run and Previews, switching between the two is instant. Performance between edits in the source code is also improved for many projects, with increases up to 30%.” It has been difficult at times to use SwiftUI previews when they sometimes just stop working with error messages that leave you scratching your head. It turns out this comes with a hidden cost: Xcode 16 wraps views with AnyView in debug builds, which takes away performance. If you don’t know that it only affects debug builds, you could end up on a journey of trying to improve the performance of debug builds and making things worse for release builds. Not sure if this was ever mentioned in any of the WWDC videos, but it feels like this kind of change should have been highlighted.

“As of Xcode 16, every SwiftUI view is wrapped in an AnyView _in debug builds only_. This speeds switching between previews, simulator, and device, but subverts some List optimizations. Add this custom build setting to the project to override the new behavior: `SWIFT_ENABLE_OPAQUE_TYPE_ERASURE=NO` Wrapping in Equatable is likely to make performance worse as it introduces an extra view in the hierarchy for every row.” (Curt Clifton on Mastodon)

Fortunately, this can be turned off if it becomes an issue in debug builds.
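For reference, a minimal way to apply that override, assuming an xcconfig-based setup (the file name below is just an illustration; the same key can equally be added as a user-defined build setting in the target’s Build Settings):

// Debug.xcconfig (hypothetical file name) – opt out of Xcode 16's AnyView wrapping in debug builds
SWIFT_ENABLE_OPAQUE_TYPE_ERASURE = NO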
Sorting arrays in Swift: multi-criteria
- Foundation
- iOS
- Swift
- localizedCaseInsensitiveCompare
- sort
- sorted(by:)
Swift’s foundation library provides a sorted(by:) function for sorting arrays. The areInIncreasingOrder closure needs to return true if the closure’s arguments are increasing, false otherwise. How to use the closure for sorting by multiple criteria? Let’s take a look at an example of sorting an array of Player structs. As said before, the closure should […]
Swift’s foundation library provides a sorted(by:) function for sorting arrays. The areInIncreasingOrder closure needs to return true if the closure’s arguments are in increasing order, false otherwise. How to use the closure for sorting by multiple criteria? Let’s take a look at an example of sorting an array of Player structs:
- Sort by score in descending order
- Sort by name in ascending order
- Sort by id in ascending order

struct Player {
    let id: Int
    let name: String
    let score: Int
}

extension Player: CustomDebugStringConvertible {
    var debugDescription: String {
        "id=\(id) name=\(name) score=\(score)"
    }
}

let players: [Player] = [
    Player(id: 0, name: "April", score: 7),
    Player(id: 1, name: "Nora", score: 8),
    Player(id: 2, name: "Joe", score: 5),
    Player(id: 3, name: "Lisa", score: 4),
    Player(id: 4, name: "Michelle", score: 6),
    Player(id: 5, name: "Joe", score: 5),
    Player(id: 6, name: "John", score: 7)
]

As said before, the closure should return true if the left element should be ordered before the right element. If they happen to be equal, we should use the next sorting criterion. For comparing strings, we’ll go for case-insensitive sorting using Foundation’s built-in localizedCaseInsensitiveCompare.

let sorted = players.sorted { lhs, rhs in
    if lhs.score == rhs.score {
        let nameOrdering = lhs.name.localizedCaseInsensitiveCompare(rhs.name)
        if nameOrdering == .orderedSame {
            return lhs.id < rhs.id
        } else {
            return nameOrdering == .orderedAscending
        }
    } else {
        return lhs.score > rhs.score
    }
}

print(sorted.map(\.debugDescription).joined(separator: "\n"))
// id=1 name=Nora score=8
// id=0 name=April score=7
// id=6 name=John score=7
// id=4 name=Michelle score=6
// id=2 name=Joe score=5
// id=5 name=Joe score=5
// id=3 name=Lisa score=4
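As a side note that is not part of the original post: on recent OS versions, roughly the same ordering can be expressed declaratively with Foundation’s KeyPathComparator. Note that .localizedStandard is close to, but not exactly the same as, localizedCaseInsensitiveCompare.

let sortedDeclaratively = players.sorted(using: [
    KeyPathComparator(\Player.score, order: .reverse),               // score descending
    KeyPathComparator(\Player.name, comparator: .localizedStandard), // name ascending
    KeyPathComparator(\Player.id)                                    // id ascending as the final tie-breaker
])
print(sortedDeclaratively.map(\.debugDescription).joined(separator: "\n"))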
How to keep Date’s microseconds precision in Swift
- Foundation
- iOS
- Swift
- ISO8601DateFormatter
DateFormatter is used for converting a string representation of date and time to a Date type and vice versa. Something to be aware of is that the conversion loses microseconds precision. This is extremely important if we use these Date values for sorting, since we could end up with an incorrect order. Let’s consider an iOS app which uses […]
DateFormatter is used for converting a string representation of date and time to a Date type and vice versa. Something to be aware of is that the conversion loses microseconds precision. This is extremely important if we use these Date values for sorting, since we could otherwise end up with an incorrect order. Let’s consider an iOS app which uses an API for fetching a list of items, where each item contains a timestamp used for sorting the list. Often, these timestamps have the ISO8601 format like 2024-09-21T10:32:32.113123Z. The Foundation framework has a dedicated formatter for parsing these strings: ISO8601DateFormatter. It is simple to use:

let formatter = ISO8601DateFormatter()
formatter.formatOptions = [.withInternetDateTime, .withFractionalSeconds]
let date = formatter.date(from: "2024-09-21T10:32:32.113123Z")
print(date?.timeIntervalSince1970)
// 1726914752.113

Great, but there is one caveat: it ignores microseconds. Fortunately this can be fixed by manually parsing the microseconds and adding the missing precision to the converted Date value. Here is an example of how to do this using an extension.

extension ISO8601DateFormatter {
    func microsecondsDate(from dateString: String) -> Date? {
        guard let millisecondsDate = date(from: dateString) else { return nil }
        guard let fractionIndex = dateString.lastIndex(of: ".") else { return millisecondsDate }
        guard let tzIndex = dateString.lastIndex(of: "Z") else { return millisecondsDate }
        guard let startIndex = dateString.index(fractionIndex, offsetBy: 4, limitedBy: tzIndex) else { return millisecondsDate }
        // Pad the missing zeros at the end and cut off nanoseconds
        let microsecondsString = dateString[startIndex..<tzIndex].padding(toLength: 3, withPad: "0", startingAt: 0)
        guard let microseconds = TimeInterval(microsecondsString) else { return millisecondsDate }
        return Date(timeIntervalSince1970: millisecondsDate.timeIntervalSince1970 + microseconds / 1_000_000.0)
    }
}

What this code does is first convert the string using the original date(from:) method, then manually extract the digits for microseconds, handling cases where there are fewer than 3 extra digits or even nanoseconds present. Lastly, a new Date value is created with microseconds precision. Here are examples of the output (note that floating-point precision comes into play).
let dateStrings = [
    "2024-09-21T10:32:32.113Z",
    "2024-09-21T10:32:32.1131Z",
    "2024-09-21T10:32:32.11312Z",
    "2024-09-21T10:32:32.113123Z",
    "2024-09-21T10:32:32.1131234Z",
    "2024-09-21T10:32:32.11312345Z",
    "2024-09-21T10:32:32.113123456Z"
]
let dates = dateStrings.compactMap(formatter.microsecondsDate(from:))
for (string, date) in zip(dateStrings, dates) {
    print(string, "->", date.timeIntervalSince1970)
}
/*
 2024-09-21T10:32:32.113Z -> 1726914752.113
 2024-09-21T10:32:32.1131Z -> 1726914752.1130998
 2024-09-21T10:32:32.11312Z -> 1726914752.1131198
 2024-09-21T10:32:32.113123Z -> 1726914752.113123
 2024-09-21T10:32:32.1131234Z -> 1726914752.113123
 2024-09-21T10:32:32.11312345Z -> 1726914752.113123
 2024-09-21T10:32:32.113123456Z -> 1726914752.113123
 */
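To see why the extra precision matters for ordering, here is a small sketch (reusing the formatter and the extension above) with two timestamps that differ only at the microsecond level:

let earlier = formatter.microsecondsDate(from: "2024-09-21T10:32:32.113123Z")!
let later = formatter.microsecondsDate(from: "2024-09-21T10:32:32.113124Z")!
// With the plain date(from:) both strings would collapse to .113 and compare as equal;
// with microsecondsDate(from:) the comparison reflects the original order.
print(earlier < later) // true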
Wrapping async-await with a completion handler in Swift
- Swift
- async
- iOS
It is not often that we need to wrap an async function with a completion handler. Typically, the reverse is what happens. This need can arise in codebases where the public interface can’t change just right now, but internally the code is moving towards async-await functions. Let’s jump in and see how to wrap an async […]
It is not often that we need to wrap an async function with a completion handler. Typically, the reverse is what happens. This need can arise in codebases where the public interface can’t change just right now, but internally the code is moving towards async-await functions. Let’s jump in and see how to wrap an async function, an async throwing function, and an async throwing function that returns a value. To illustrate how to use it, we’ll look at an example where a PhotoEffectApplier type has a public interface consisting of completion handler based functions and internally uses a PhotoProcessor type that only has async functions. The end result looks like this:

struct PhotoProcessor {
    func process(_ photo: Photo) async throws -> Photo {
        // …
        return Photo(name: UUID().uuidString)
    }

    func setConfiguration(_ configuration: Configuration) async throws {
        // …
    }

    func cancel() async {
        // …
    }
}

public final class PhotoEffectApplier {
    private let processor = PhotoProcessor()

    public func apply(effect: PhotoEffect, to photo: Photo, completion: @escaping (Result<Photo, Error>) -> Void) {
        Task(operation: {
            try await self.processor.process(photo)
        }, completion: completion)
    }

    public func setConfiguration(_ configuration: Configuration, completion: @escaping (Error?) -> Void) {
        Task(operation: {
            try await self.processor.setConfiguration(configuration)
        }, completion: completion)
    }

    public func cancel(completion: @escaping (Error?) -> Void) {
        Task(operation: {
            await self.processor.cancel()
        }, completion: completion)
    }
}

In this example, we have all the interesting function types covered: async, async throwing, and async throwing with a return type. Great, but let’s have a look at the Task initializers that make this happen. The core idea is to create a Task, run an operation, and then call the completion handler. Since most of the time we need to run the completion on the main thread, there is a queue argument with the default set to the main queue.

extension Task {
    @discardableResult
    init<T>(
        priority: TaskPriority? = nil,
        operation: @escaping () async throws -> T,
        queue: DispatchQueue = .main,
        completion: @escaping (Result<T, Failure>) -> Void
    ) where Success == Void, Failure == any Error {
        self.init(priority: priority) {
            do {
                let value = try await operation()
                queue.async { completion(.success(value)) }
            } catch {
                queue.async { completion(.failure(error)) }
            }
        }
    }
}

extension Task {
    @discardableResult
    init(
        priority: TaskPriority? = nil,
        operation: @escaping () async throws -> Void,
        queue: DispatchQueue = .main,
        completion: @escaping (Error?) -> Void
    ) where Success == Void, Failure == any Error {
        self.init(priority: priority) {
            do {
                try await operation()
                queue.async { completion(nil) }
            } catch {
                queue.async { completion(error) }
            }
        }
    }
}

extension Task {
    @discardableResult
    init(
        priority: TaskPriority? = nil,
        operation: @escaping () async -> Void,
        queue: DispatchQueue = .main,
        completion: @escaping () -> Void
    ) where Success == Void, Failure == Never {
        self.init(priority: priority) {
            await operation()
            queue.async { completion() }
        }
    }
}
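As a usage sketch (not from the original post; the PhotoEffect initializer and the Photo name property are assumed here for illustration), the completion-based surface can then be called like any other callback API:

let applier = PhotoEffectApplier()
// PhotoEffect() is a placeholder initializer assumed for this example.
applier.apply(effect: PhotoEffect(), to: Photo(name: "portrait")) { result in
    // The completion is dispatched on the main queue by default.
    switch result {
    case .success(let processed):
        print("Processed photo: \(processed.name)")
    case .failure(let error):
        print("Processing failed: \(error)")
    }
}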
Dark Augmented Code theme for Xcode
- Swift
- Xcode
After a couple of years, I tend to get tired of looking at the same colour scheme in Xcode. Then I spend quite a bit of time looking for a new theme, often coming back empty-handed. Material default has served me for a while, but it never felt like a perfect colour […]
After a couple of years, I tend to get tired of looking at the same colour scheme in Xcode. Then I spend quite a bit of time looking for a new theme, often coming back empty-handed. Material default has served me for a while, but it never felt like the perfect colour scheme for me. Therefore, I decided to take the road of creating a new colour scheme of my own, which is named “Augmented Code (Dark)”. It is available for Xcode and iTerm 2. Download it from here: GitHub


Cancellable withObservationTracking in Swift
- iOS
- Swift
- SwiftUI
- observation
- withObservationTracking
The Observation framework came out along with iOS 17 in 2023. Using this framework, we can make objects observable very easily. Please refer to @Observable macro in SwiftUI for a quick recap if needed. It also has a function withObservationTracking(_:onChange:) that can be used for cases where we would want to manually get a callback when a tracked […]
The Observation framework came out along with iOS 17 in 2023. Using this framework, we can make objects observable very easily. Please refer to @Observable macro in SwiftUI for a quick recap if needed. It also has a function withObservationTracking(_:onChange:) that can be used for cases where we want to manually get a callback when a tracked property is about to change. This works as a one-shot function and the onChange closure is called only once. Note that it is called before the value has actually changed. If we want to get the changed value, we need to read it on the next run loop cycle. It would be much more useful if we could use this function in a way where we have an observation token, and as long as it is set, the observation stays active. Here is the function with cancellation support.

func withObservationTracking(
    _ apply: @escaping () -> Void,
    token: @escaping () -> String?,
    willChange: (@Sendable () -> Void)? = nil,
    didChange: @escaping @Sendable () -> Void
) {
    withObservationTracking(apply) {
        guard token() != nil else { return }
        willChange?()
        RunLoop.current.perform {
            didChange()
            withObservationTracking(
                apply,
                token: token,
                willChange: willChange,
                didChange: didChange
            )
        }
    }
}

The apply closure drives which values are being tracked, and this is passed into the existing withObservationTracking(_:onChange:) function. The token closure controls whether the change should be handled and whether we need to continue tracking. willChange and didChange are closures called before and after the value has changed. Here is a simple example where we have a view which controls whether the observation should be active. Changing the value in the view model only triggers the print lines when the observation token is set.

struct ContentView: View {
    @State private var viewModel = ViewModel()
    @State private var observationToken: String?

    var body: some View {
        VStack {
            Text(viewModel.title)
            Button("Add") {
                viewModel.add()
            }
            Button("Start Observing") {
                guard observationToken == nil else { return }
                observationToken = UUID().uuidString
                observeAndPrint()
            }
            Button("Stop Observing") {
                observationToken = nil
            }
        }
        .padding()
    }

    func observeAndPrint() {
        withObservationTracking({
            _ = viewModel.title
        }, token: {
            observationToken
        }, willChange: { [weak viewModel] in
            guard let viewModel else { return }
            print("will change \(viewModel.title)")
        }, didChange: { [weak viewModel] in
            guard let viewModel else { return }
            print("did change \(viewModel.title)")
        })
    }
}

@Observable final class ViewModel {
    var counter = 0

    func add() {
        counter += 1
    }

    var title: String {
        "Number of items: \(counter)"
    }
}
Referencing itself in a struct in Swift
- Foundation
- iOS
- Swift
It took a long time, I mean years, but it finally happened. I stumbled on a struct which had a property of the same type. At first, it is kind of interesting that the replies property compiles fine, although it is a collection of the same type. I guess it is so because array’s storage […]
It took a long time, I mean years, but it finally happened. I stumbled on a struct which had a property of the same type.

struct Message {
    let id: Int

    // This is OK:
    let replies: [Message]

    // This is not OK
    // Value type 'Message' cannot have a stored property that recursively contains it
    let parent: Message?
}

At first, it is kind of interesting that the replies property compiles fine, although it is a collection of the same type. I guess it is so because the array’s storage is a reference type. The simplest workaround is to use a closure for capturing the actual value.

struct Message {
    let id: Int
    let replies: [Message]

    private let parentClosure: () -> Message?
    var parent: Message? { parentClosure() }
}

Or we could go for using a boxed wrapper type.

struct Message {
    let id: Int
    let replies: [Message]

    private let parentBoxed: Boxed<Message>?
    var parent: Message? { parentBoxed?.value }
}

class Boxed<T> {
    let value: T

    init(value: T) {
        self.value = value
    }
}

Or, if we prefer property wrappers, we can use one of those instead.

struct Message {
    let id: Int
    let replies: [Message]

    @Boxed var parent: Message?
}

@propertyWrapper class Boxed<Value> {
    var value: Value

    init(wrappedValue: Value) {
        value = wrappedValue
    }

    var wrappedValue: Value {
        get { value }
        set { value = newValue }
    }
}

Then there are also options like changing the struct into a class instead, but that is something to consider carefully. All in all, it is fascinating how something this simple actually has a pretty complex background.
ScrollView phase changes on iOS 18
- Swift
- SwiftUI
- iOS
- onScrollPhaseChange
- ScrollGeometry
- ScrollPhase
- ScrollPhaseChangeContext
- ScrollView
In addition to scroll related view modifiers covered in the previous blog post, there is another one for detecting scroll view phases aka the state of the scrolling. The new view modifier is called onScrollPhaseChange(_:) and has three arguments in the change closure: old phase, new phase and a context. ScrollPhase is an enum with […]
In addition to the scroll related view modifiers covered in the previous blog post, there is another one for detecting scroll view phases, aka the state of the scrolling. The new view modifier is called onScrollPhaseChange(_:) and has three arguments in the change closure: old phase, new phase, and a context. ScrollPhase is an enum with the following values:
- animating – animating the content offset
- decelerating – user interaction stopped and scroll velocity is decelerating
- idle – no scrolling
- interacting – user is interacting
- tracking – a potential user initiated scroll event is going to happen

The enum has a convenience property isScrolling which is true when the phase is not idle. ScrollPhaseChangeContext captures additional information about the scroll state, and it is the third argument of the closure. The type gives access to the current ScrollGeometry and the velocity of the scroll view. Here is an example of a scroll view which has the new view modifier attached.

struct ContentView: View {
    @State private var scrollState: (
        phase: ScrollPhase,
        context: ScrollPhaseChangeContext
    )?

    let data = (0..<100).map({ "Item \($0)" })

    var body: some View {
        NavigationStack {
            ScrollView {
                ForEach(data, id: \.self) { item in
                    Text(item)
                        .frame(maxWidth: .infinity)
                        .padding()
                        .background {
                            RoundedRectangle(cornerRadius: 8)
                                .fill(Color.cyan)
                        }
                        .padding(.horizontal, 8)
                }
            }
            .onScrollPhaseChange { oldPhase, newPhase, context in
                scrollState = (newPhase, context)
            }
            Divider()
            VStack {
                Text(scrollStateDescription)
            }
            .font(.footnote.monospaced())
            .padding()
        }
    }

    private var scrollStateDescription: String {
        guard let scrollState else { return "" }
        let velocity: String = {
            guard let velocity = scrollState.context.velocity else { return "none" }
            return "\(velocity)"
        }()
        let geometry = scrollState.context.geometry
        return """
        State at the scroll phase change
        Scrolling=\(scrollState.phase.isScrolling)
        Phase=\(scrollState.phase)
        Velocity \(velocity)
        Content offset \(geometry.contentOffset)
        Visible rect \(geometry.visibleRect.integral)
        """
    }
}
Recent content on Benoit Pasquier
From Engineer to Manager: A Year of Growth and Transformation
It feels like it was yesterday when I became an engineering manager but it has been almost a year. I want to take this time to reflect on the challenges and learnings from this journey.
Things to know before becoming an Engineering Manager
The journey from individual contributor to engineering manager isn’t always straightforward. Today, I’ll share what it means to become an engineering manager from my point of view, and a few important points to be aware of before making this transition.
Transitioning to an Engineering Manager role
It’s been a while since I posted anything on my website; a few changes in 2022 kept me away from writing. It’s time to resume.
Security Application Static Analysis applied to iOS and Gitlab CI
Security is a big topic in software engineering, but how does it apply to mobile development? We care about user experience and mobile performance, yet security issues are rarely prioritized. This week, I’ll share how to integrate security tools into your CI pipeline to stay aware of your codebase’s health.
Being more efficient as a mobile engineer
I was reading this week about “10x engineer” and what it means in the tech industry. If the title can be questionable, I wanted to reflect on its definition and what it can mean in mobile engineering.
When to remove your iOS app from the App Store
For most mobile engineers, the end game is to release our own apps. For the few projects that make it to the App Store, it can be pretty hard to keep them alive over time. Eventually, the question comes up: should I remove my app from the App Store? Today, I’ll share about the thought process that makes me sunset one.
Weak self, a story about memory management and closure in Swift
Memory management is a big topic in Swift and iOS development. While there are plenty of tutorials explaining when to use weak self with closures, here is a short story about when memory leaks can still happen with it.
Setting up Auto Layout constraints programmatically in Swift
In iOS development, content alignment and spacing is something that can take a lot of our time. Today, let’s explore how to set constraint with UIKit, update them and resolve constraint conflicts.
Ten years of blogging, one article at a time
Most people don’t know it, but I’ve been blogging for some time now. Actually, tomorrow will be ten years. Today is a good time to take a walk down memory lane.
Deep linking and URL scheme in iOS
Opening an app from a URL is such a powerful iOS feature. It drives users to your app and can create shortcuts to specific features. This week, we’ll dive into deep linking on iOS and how to create a URL scheme for your app.
Tips and tweaks to integrate Github Action to your iOS project
I’ve been exploring more and more tooling around iOS ecosystem. One tool I really enjoy using those days is Github Action as a continuous integration for my projects. Today we’ll dive into tips and tweaks to make the most of it.
Flutter and fastlane, how to setup an iOS continuous delivery solution
When it comes to iOS development, everybody has their own favorite language and framework: Swift, Objective-C, SwiftUI, React Native, Flutter and so on. Unlike most of my previous posts, today we’re going to leverage some iOS tooling for a cross-platform technology: fastlane and Flutter.
Currency TextField in SwiftUI
Between banking and crypto apps, we interact with currency inputs quite often on a daily basis. Since creating a localized UITextField can already be tricky in UIKit, I was wondering how hard it would be to do a similar one in SwiftUI. Let’s see today how to create a localized currency TextField in SwiftUI.
Open Source checklist for your next Swift library
Like many developers, I use open source tools on daily basis. Recently, I’ve got the chance to create one for other teammates and try to think about what I should consider before launching it. Today I share this checklist.
Unit testing UIView action and gesture in Swift
A big part of the developer journey is making sure our code behaves as expected. It’s best practice to set up tests that allow us to check quickly and often that nothing is broken. While unit testing is common practice for checking business logic, we can also extend it to cover some specific UI behaviors. Let’s see how to unit test views and gestures in UIKit.
Dependency injection and Generics to create a modular app in Swift
When we talk about modular apps, we rarely mention how complex they can become over time and get out of hand. In most cases, importing frameworks into one another is a reasonable solution, but we can do more. Let’s explore how, with dependency inversion in Swift, and how to create order among our components.
Things I wish I knew in my early coding career
For the past few years, I had the opportunity to mentor new joiners through different roles. In some aspects, I could see myself in them the same way I started years back: eager to prove themselves, jumping on the code and hacking around.
I tried to think about what I learnt the hard way since my first role in the tech industry and how I could help them learn the easy way.
Create a web browser with WebKit and SwiftUI
Recently, I’ve been more and more curious about web experience through mobile apps. Most of web browser apps look alike, I was wondering how could I recreate one with WebKit and SwiftUI. Let’s dive in.
Migrating an iOS app to SwiftUI - Database with Realm
To move an existing iOS app codebase to SwiftUI can quickly become a challenge if we don’t scope the difficulties ahead. After covering the navigation and design layer last week, it’s time to dive deeper into the logic and handle the code migration for a database and the user preferences.
Migrating an iOS app to SwiftUI - Navigation & Storyboards
If SwiftUI is great for many things, migrating completely an existing app codebase to it can be really tricky. In a series of blog posts, I’ll share how to migrate an iOS app written in Swift with UIKit to SwiftUI. Today, let’s start with the navigation and the UI components with storyboards.
Creating a webcam utility app for macOS in SwiftUI
Did you ever have to share your screen and camera together? I recently did and it was that easy. How hard could it be to create our own? Today, we’ll code our own webcam utility app for macOS in SwiftUI.
Migrating MVVM architecture from RxSwift to Combine
It’s been almost two years since Combine was introduced to the Apple developer community. Like many developers, you may want to migrate your codebase to it. You don’t want to be left behind, but you’re not sure where to start, and maybe not sure if you want to jump to SwiftUI either. Nothing to worry about; let’s see step by step how to migrate an iOS sample app using UIKit and RxSwift to Combine.
How to display date and time in SwiftUI
Displaying dates or times is a very common requirement for many apps, often using a specific date formatter. Let’s see what SwiftUI brings to the table to make it easier for developers.
Create a dynamic onboarding UI in Swift
When creating new features, it’s really important to think about how our users will use it. Most of the time, the UI is straightforward enough. However, sometimes, you will want to give some guidance, to highlight a button or a switch, with a message attached. Today, we’ll create a reusable and adaptable overlay in Swift to help onboard mobile users for any of your features.
Goodbye 2020 - A year in perspective
Close to the end of the year, I tend to list what I’ve accomplished but also what didn’t go so well, to help me see what I can do better next year. A couple of days early, it’s time to look back at 2020.
How to pass data between views using Coordinator pattern in Swift
A question that comes back often when using Coordinator pattern in iOS development is how to pass data between views. Today I’ll share different approaches for a same solution, regardless if you are using MVVM, MVC or other architectural design pattern.
Automating App Store localized screenshots with XCTest and Xcode Test Plan
One reason I like working on native mobile apps so much is delivering a user experience based on the user’s region and location. However, for every update, it can be painful for developers to recapture screenshots for each available language. Today, I’ll share how to automate this with UI tests and Xcode tools.
Playing Video with AVPlayer in SwiftUI
I’ve been experimenting more and more with SwiftUI and I really wanted to see what we can do with video content. Today I’ll share my findings, showing how to play video using AVFoundation in SwiftUI, including some mistakes to avoid.
With Catalyst and SwiftUI multi-platform, should you create a macOS version of your app?
With Mac Catalyst and SwiftUI support for macOS, Apple has been pushing new tools to the community for the past couple of years to create new services on Mac computers. Does it mean you should too? Here are a couple of things to consider first.
Create a watchOS app in SwiftUI
Designing a watchOS app in Swift has always felt quite tricky. I could spend hours tweaking and redoing layouts and constraints. With SwiftUI supporting watchOS, I wanted to have another try at it, releasing a standalone app for Apple Watch.
As software engineer, how to face the impostor syndrome?
Stepping back from coding for a week and reading about the community, I realized how easy it is to be crushed by anxiety: I see so many great things happening every day, things I want to be part of, but at the same time I get anxious about being good enough. These are my thoughts on how to face impostor syndrome.
Advanced testing tips in Xcode
In the last couple years, Apple has made some good efforts to improve their testing tools. Today, I’ll walk you through some tips to make sure your test suite run at their best capacity.
Atomic properties and Thread-safe data structure in Swift
A recurring challenge in programming is accessing a shared resource concurrently. How to make sure the code doesn’t behave differently when multiple thread or operations tries to access the same property. In short, how to protect from a race condition?
Deploying your Swift code on AWS Lambda
About a month ago, it became possible to run Swift code on AWS Lambda. I was really interested to try it and see how easy it would be to deploy small Swift functions as a serverless application. Let’s see how.
Introduction to MVVM pattern in Objective-C
Even though the iOS ecosystem is growing further every day from Objective-C, some companies still heavily rely on it. A week away for another wave of innovation from WWDC 2020, I thought it would be interesting to dive back into Objective-C starting with a MVVM pattern implementation.
100 day challenge of data structure and algorithm in Swift
Since January, I’ve been slowing down blogging for a couple of reasons: I started doubting myself and the quality of my content, but I also wanted to focus more on some fundamentals I felt I was missing. So I committed to a “100 day challenge”, focused on data structures and algorithms in Swift.
Data Structure - Implementing a Tree in Swift
Following up previous articles about common data structure in Swift, this week it’s time to cover the Tree, a very important concept that we use everyday in iOS development. Let’s dive in.
Using Key-Value Observing in Swift to debug your app
Recently, I was looking into a bug where the UITabBar was inconsistently disappearing on specific pages. I tried different approaches but I couldn’t figure out where it got displayed and hidden. That’s when I thought about KVO.
Data Structure - Coding a Stack in Swift
After covering last week how to code a Queue in Swift, it sounds natural to move on to the Stack, another really handy data structure which also find his place in iOS development. Let’s see why.
Data Structure - How to implement a Queue in Swift
Recently revisiting computer science fundamentals, I was interested to see how specific data structure applies to iOS development, starting this week one of most common data structure: the queue.
Should I quit blogging?
When I started this blog in 2012, it was at first to share solutions to technical problems I encountered in my daily work, to give back to the community. Over the years, I extended the content to other projects and ideas I had. Nowadays, I get more and more feedback on it, sometimes good, sometimes bad; either way, there is always something to learn from.
Start your A/B testing journey with SwiftUI
Last year, I shared a solution to tackle A/B testing on iOS in swift. Now that we have SwiftUI, I want to see if there is a better way to implement A/B testing. Starting from the same idea, I’ll share different implementations to find the best one.
How to make your iOS app smarter with sentiment analysis
For quite some time now, I’ve been developing an interest in data analysis to find new ways to improve mobile apps. I’ve recently found some time to experiment with natural language processing for a very specific use case related to my daily work: sentiment analysis of customer reviews on fashion items.
Localization with SwiftUI, how to preview your localized content
With SwiftUI being recently introduced, I was curious if we could take advantage of SwiftUI preview to speed up testing localization and make sure your app looks great for any language.
SwiftUI - What has changed in your MVVM pattern implementation
Introduced in 2019, Apple made UI implementation much simpler with With SwiftUI its UI declarative framework. After some time experiencing with it, I’m wondering today if MVVM is still the best pattern to use with. Let’s see what has changed, implementing MVVM with SwiftUI.
Data Structure and Algorithm applied to iOS
When asked about data structure and algorithm for an iOS development role, there is always this idea that it’s not a knowledge needed. Swift got already native data structure, right? Isn’t the rest only UI components? That’s definitely not true. Let’s step back and discuss about data structure and algorithm applied to iOS development.
How to integrate Redux in your MVVM architecture
For last couple years, I’ve been experimenting different architectures to understand pros and cons of each one of them. Redux architecture is definitely one that peek my curiosity. In this new post, I’ll share my finding pairing Redux with MVVM, another pattern I’m familiar with and more importantly why you probably shouldn’t pair them.
Software engineer, it's okay to not have a side project
There is a believe that any software developer must contribute or have a side project to work on. Even if it’s great to have, I think there is something bigger at stake doing that.
How to build a modular architecture in iOS
Over time, any code base grows along with the project evolves and matures. It creates two main constraints for developers: how to have a code well organized while keeping a build time as low as possible. Let’s see how a modular architecture can fix that.
Analytics - How to avoid common mistakes in iOS
I have been interested in analytics tools for a while, especially when it’s applied to mobile development. Over the time, I saw many code mistakes when implementing an analytical solution. Some of them can be easily avoided when developer got the right insights, let’s see how.
Apps and Projects
Over the time, I spent quite some time building different apps and projects. Here is the list of the one that became something. Lighthouse is a webapp written in Swift to test universal link configuration. Driiing, a running companion app to signal runners coming to pedestrians. Appy, an iOS app that takes helps you quit your bad habit. Square is an resizing tool for app icons written in Rust. Japan Direct, an itinerary app for iOS to visit Japan like a local.
Events and Talks
I recently tried to be more active in the iOS community. Becoming speaker and talks to events is my next challenged. Here is the list of talks I’ve made so far. My very first one was recently at iOS meetup Singapore in July 2019, talking about scalability of an iOS app along with your team. You can read more about this whole new journey here. I also got chance to be part of iOS Conf SG 2021, an online version of the very popular international event iOS Conf SG.
Code Coverage in Xcode - How to avoid a vanity metric for your iOS app
Since Xcode 7, iOS developers can generate a code coverage for their app: a report showing which area of their app is covered by unit tests. However, this is isn’t always accurate, let’s see why you should not base your code health only on code coverage.
Appy, an iOS app to help you quit your bad habits
It has been a while since I wanted to create something helpful to others, not just another random app. Then I found out there were not many great sobriety apps, so I launched one. Here is Appy, to help you quit your bad habits.
How to integrate Sign In with Apple in your iOS app
With iOS13, Apple is introducing "Sign In with Apple", an authentication system that allows users to create an account for your app based on their Apple ID. Let's see how to integrate it into your app and be ready for the iOS13 launch.
How to avoid common mistakes for your first iOS talk
I have been a bit quieter for the past couple of weeks, taking a break from my weekly blogging routine. It's not because I was lazy, but I wanted to take time to digest WWDC. At the same time I had other projects running; one was my first talk at an iOS meetup. Here are a couple of tips I would have loved to hear earlier.
First steps in functional reactive programming in Swift with Apple Combine framework
One debate over the past year in the iOS ecosystem was around functional reactive frameworks like RxSwift or ReactiveCocoa. This year at WWDC 2019, Apple took a position on it and released its own functional reactive programming framework: here is Combine.
iOS Code Review - Health check of your Swift code
I was recently asked to review an iOS application to see how healthy the code base was, whether it follows best practices, and how easy it would be to add new features to it. While I review code on a daily basis in small pull requests, analyzing a whole app at once is quite a different exercise. Here are some guidelines to help with that analysis.
How to implement Coordinator pattern with RxSwift
After weeks of experimenting with different patterns and code structures, I wanted to go further in functional reactive programming and see how to take advantage of it while following the Coordinator pattern. This post describes how to integrate RxSwift with the Coordinator pattern and which mistakes to avoid.
ReSwift - Introduction to Redux architecture in Swift
If you are not familiar with it, Redux is a JavaScript open source library designed to manage web application state. It helps a lot to make sure your app always behaves as expected and makes your code easier to test. ReSwift is the same concept, but in Swift. Let's see how.
Tools and tips to scale your iOS project along with your team
We often talk about the scalability of an iOS app but not much about the project itself or the team. How do you prepare your project to move from 2 developers to 6? How about 10 or 20 more? In that research, I've listed different tools to prepare your team and project to scale.
RxSwift & MVVM - Advanced concepts of UITableView with RxDataSources
For the past months, I keep going further into RxSwift usage. I really like the idea of forwarding events through different layers, but the user interface sometimes stays a challenge. Today, I'll describe how to use RxDataSources to keep things as easy as possible.
How to use Vapor Server to write stronger UI tests in Swift
Even if I usually stay focused on the customer-facing side of mobile development, I like the idea of writing a backend API with all the security that Swift includes. Starting small, why not use a Swift server for our UI tests to mock content and stay as close as possible to the real app.
How to bootstrap your iOS app to iterate faster
I love developing new iOS apps and creating new products. However, regardless of the project, it often needs a team to mix the required skills: design, coding, marketing. This is less and less true though, so let's see how to bootstrap your iOS app.
RxSwift & MVVM - How to use RxTests to test your ViewModel
Not that long ago, I wrote about how to pair RxSwift with the MVVM architecture in an iOS project. Even if I refactored my code to be reactive, I didn't mention the unit tests. Today I'll show step by step how to use RxTest to unit test your code.
Down the rabbit hole of iOS design patterns
For years now, the whole iOS community has written content about the best way to improve or replace the Apple MVC we all started with, myself included. MVC, MVVM, MVP, VIPER? Regardless of the type of snake you have chosen, it's time to reflect on that journey.
Coordinator & MVVM - Clean Navigation and Back Button in Swift
After introducing how to implement the Coordinator pattern with an MVVM structure, it feels natural for me to go further and cover some of the blind spots of Coordinator and how to fix them along the way.
Reversi - An elegant A/B testing framework for iOS in Swift.
A couple of weeks ago, I heard somebody talking about A/B testing in iOS and how "mobile native A/B testing is hard to implement". It didn't sound right to me. So I built a tiny framework for that in Swift. Here is Reversi.
Dos and Don'ts for creating an onboarding journey on iOS
I was recently researching onboarding journeys in iOS, that succession of screens displayed at the first launch of a freshly installed mobile app. But regardless of how beautiful the design can be, why are so many people tempted to skip it? I listed things to consider while creating an onboarding journey for your iOS app.
Introduction to Coordinator pattern in Swift
After some time creating different iOS apps following an MVVM pattern, I'm often not sure how to implement the navigation. If the View handles the rendering and the user's interactions and the ViewModel the service or business logic, where does the navigation sit? That's where the Coordinator pattern comes in.
How to create a customer focused mobile app
Last year, a friend and I launched Japan Direct, an itinerary app for Japan travellers. Even if the first version came together quite quickly, I kept iterating, always staying focused on customer feedback first. Almost a year later, it's a good time for a retrospective, to see what worked and how we created a customer focused app.
Adaptive Layout and UICollectionView in Swift
Apple introduced trait variations in iOS8, letting developers create more adaptive designs for their mobile apps, reducing code complexity and avoiding duplicated code between devices. But how do you take advantage of variations for UICollectionView?
This post covers how to set up variations via Interface Builder as well as programmatically, using Auto Layout and UITraitVariation with a UICollectionView to create a unique adaptive design.
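As a rough idea of the programmatic side, a flow layout can be adapted to the current size class along these lines (a sketch with assumed column counts and spacing, not the article's code):

```swift
import UIKit

final class AdaptiveGridViewController: UICollectionViewController {

    override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
        super.traitCollectionDidChange(previousTraitCollection)
        guard traitCollection.horizontalSizeClass != previousTraitCollection?.horizontalSizeClass,
              let layout = collectionViewLayout as? UICollectionViewFlowLayout else { return }

        // Two columns in a compact width environment, four in a regular one.
        let columns: CGFloat = traitCollection.horizontalSizeClass == .compact ? 2 : 4
        let spacing: CGFloat = 8
        let width = (collectionView.bounds.width - spacing * (columns + 1)) / columns

        layout.sectionInset = UIEdgeInsets(top: spacing, left: spacing, bottom: spacing, right: spacing)
        layout.minimumInteritemSpacing = spacing
        layout.itemSize = CGSize(width: width, height: width)
        layout.invalidateLayout()
    }
}
```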
RxSwift & MVVM - An alternative structure for your ViewModel
For last couple weeks, I’ve worked a lot about how to integrate RxSwift into an iOS project but I wasn’t fully satisfied with the view model. After reading many documentation and trying on my side, I’ve finally found a structure I’m happy with.
Create a machine learning model to classify Fashion images in Swift
Since WWDC18, Apple made it way easier to developers to create model for machine learning to integrate iOS apps. I have tried myself in the past different models, one for face detection and create another with Tensorflow to fashion classification during a hackathon. Today I’ll share with you how I create a model dedicated to fashion brands.
How to integrate RxSwift in your MVVM architecture
It took me quite some time to get into Reactive Programming and its variant adapted for iOS development with RxSwift and RxCocoa. However, being fan of MVVM architecture and using an observer design pattern with it, it was natural for me to revisit my approach and use RxSwift instead. Thats what I’m going to cover in this post.
Design pattern in Swift - Delegation
The delegation pattern is one of the most common design pattern in iOS. You probably use it on daily basis without noticing, every time you create a UITableView or UICollectionView and implementing their delegates. Let’s see how it works and how to implement it in Swift.
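The shape of the pattern is roughly this (illustrative names, not the article's example):

```swift
// A minimal sketch of the delegation pattern, mirroring how UIKit's own delegates work.
protocol TaskRunnerDelegate: AnyObject {
    func taskRunner(_ runner: TaskRunner, didFinishWith result: String)
}

final class TaskRunner {
    // The delegate is weak to avoid a retain cycle with its owner.
    weak var delegate: TaskRunnerDelegate?

    func run() {
        // ...perform some work, then report back to whoever is listening.
        delegate?.taskRunner(self, didFinishWith: "done")
    }
}

final class Screen: TaskRunnerDelegate {
    private let runner = TaskRunner()

    func start() {
        runner.delegate = self
        runner.run()
    }

    func taskRunner(_ runner: TaskRunner, didFinishWith result: String) {
        print("Task finished: \(result)")
    }
}

Screen().start() // prints "Task finished: done"
```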
UI testing - How to inspect your iOS app with Calabash and Appium
Part of the journey in software development is testability. For mobile development, testability for your iOS app goes through UI testing. Let's see different ways to inspect any UI element and prepare your iOS app for UI automation testing.
Don't forget what you've accomplished this year
While wishing a happy new year to people around me, they helped me realise how many good things happened to me this year. Funny enough, while listing my goals for 2019, I found the matching list for 2018, and here is what really happened.
Develop your creativity with ephemeral iOS apps
From my first year studying computer science, I've always wanted to do more in my free time and create simple projects that could be useful to others. I won't lie, I wish I had been able to monetize them, but regardless of the outcome, learning was always part of the journey.
Design pattern in Swift - Observers
During this year, I have blogged quite a bit about code architecture in Swift and I've realized that I didn't explain much about which design patterns to use with it. In a series of coming posts, I will cover different design patterns, starting now with the observer.
Build a visual search app with TensorFlow in less than 24 hours
For a while now, I've really wanted to work on a machine learning project, especially since Apple now lets you import trained models into your iOS app. Last September, I took part in a 24h hackathon for an e-commerce business; that was my chance to test it. The idea was simple: a visual search app, listing similar products based on a picture.
Always keep your skills sharp
It has been a couple of months since my last post and, despite my intentions, a lot of things kept me busy and away from blogging. Looking back, it all comes down to the same idea: why it's important to always keep your skills sharp.
How to detect if your iOS app hits product market fit
A couple of months ago, I built an app and released it on the App Store. Since publishing it, I really wanted to see how it lives and understand how to make it grow. Ideally, I wanted to know if there is a product/market fit. In this article, I describe each step and idea that helped my app grow and what I learnt from it.
The best way to encode and decode JSON in Swift4
Most mobile apps interact at some point with remote services, fetching data from an API, submitting a form… Let's see how to use Codable in Swift to easily encode objects and decode JSON in a couple of lines of code.
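For a flavour of what that looks like, here is a minimal, self-contained sketch (the model is made up for illustration):

```swift
import Foundation

struct User: Codable {
    let id: Int
    let name: String
    let email: String
}

let json = Data("""
{"id": 1, "name": "Ada", "email": "ada@example.com"}
""".utf8)

do {
    // Decoding JSON into a value
    let user = try JSONDecoder().decode(User.self, from: json)
    print(user.name) // "Ada"

    // Encoding the value back into JSON
    let encoder = JSONEncoder()
    encoder.outputFormatting = .prettyPrinted
    let data = try encoder.encode(user)
    print(String(decoding: data, as: UTF8.self))
} catch {
    print("Codable error: \(error)")
}
```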
Why choosing XCUITest framework over Appium for UI automation testing
I recently went to a Swift conference and UI automation testing was one of the subjects. I already mentioned it with Appium in the past, but I think it's time to go back to it and explain why today I still prefer using Apple's testing framework instead.
Why and how to add home screen shortcut for your iOS app
I recently implemented 3D Touch for an app and I was very interested in home screen quick actions. Even if it can be a good way to improve the user experience, it doesn't mean your app always needs it. In this article, I explain how to add a home screen shortcut for your app in Swift, but mostly what can justify implementing it.
What I learn from six years of blogging
I recently realised that my first blog post was 6 years ago. It's a good occasion for me to do a little retrospective and share what I learnt from blogging over the years.
Error handling in MVVM architecture in Swift
If you care about user experience, error handling is a big part you have to cover. We can design what a mobile app looks like when it works, but what happens when something goes wrong? Should we display an alert to the user? Can the error stay silent? And mostly, how do we implement it the best way with our current design pattern? Let's see our options while following the MVVM pattern.
From the idea of an iOS app to App Store in 10 hours
The best way to learn and become more creative as a developer is to focus on a side project. A really good friend coming back from Japan came to me with an idea just when I needed that side project. This is how we created Japan Direct, from the idea to the App Store in almost no time.
How to optimise your UICollectionView implementation in Swift
For the last couple of weeks, I tried to step back from my development work to analyse what is time consuming in mobile development. I realised that most new views are based on the same approach, reimplementing a similar structure around a UICollectionView or UITableView.
What if I could have a more generic approach where I can focus only on what matters, the user experience? That's what I try to explore in this article.
Support universal links in your iOS app
In the last couple of weeks, I have traveled with only my iPhone with me and I realised how many apps I use daily still rely on their websites. Even with the right iOS app installed, I had to browse in Safari to get specific details. That is why it's so important to support universal links in iOS. Let me show you how.
Make the most of enumerations in Swift
Enumerations have changed a lot between Objective-C and Swift. We can easily forget how useful and powerful they can be. I wanted to get back to them through simple examples to make the most of them.
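A small illustrative example of what Swift enums bring (not taken from the article): associated values, computed properties, and exhaustive switching.

```swift
enum PaymentMethod {
    case cash
    case card(last4: String)
    case voucher(code: String, amount: Double)

    var label: String {
        switch self {
        case .cash:
            return "Cash"
        case .card(let last4):
            return "Card ending in \(last4)"
        case .voucher(let code, let amount):
            return "Voucher \(code) worth \(amount)"
        }
    }
}

let method = PaymentMethod.voucher(code: "SUMMER10", amount: 10)
print(method.label) // "Voucher SUMMER10 worth 10.0"
```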
How to integrate Firebase in your iOS app
Firebase is a set of tools introduced by Google to build better mobile apps. I have worked with it many times, and even if it's straightforward to integrate, here are a couple of implementation tips to make the most of it.
From lean programming to growth marketing
I recently followed a growth marketing course, introducing the mindset and methodology to make a company grow. I learnt a lot from it and since then, I try to apply this knowledge on a daily basis. After more reflection on it, a lot of the ideas looked very similar to the software development job; this is the part I would like to share.
Introduction to Protocol-Oriented Programming in Swift
When I started coding years ago, it was all about object-oriented programming. With Swift, a new approach came up, making the code even easier to reuse and to test: Protocol-Oriented Programming.
Why you should abstract any iOS third party libraries
If you have an iOS app, you might have integrated external libraries and tools to help you get your product ready faster. However, your iOS architecture and Swift code shouldn't depend on those libraries.
Optimise Xcode build to speed Fastlane
The best part of continuous integration is the ability to automatically run tests and build apps, ready to be deployed. However, an automatic build doesn't mean a smart or optimised build. Here are some tips I collected along the way to speed up the delivery process.
Unit Testing your MVVM architecture in Swift
To be sure new code won't break what is already implemented, it's best practice to write unit tests. When it comes to app architectures, it can be a challenge to write those tests. Following an MVVM pattern, how do you unit test a view and its viewModel? That's what I would like to cover here using dependency injection.
How to implement MVVM pattern in Swift from scratch
Creating a new app often raises the question of what architecture to choose and which pattern would fit best. In this post, I show how to implement an MVVM pattern around a sample app in Swift.
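In its most stripped-down form, the split might look like this sketch (illustrative types, not the article's sample app):

```swift
import Foundation

struct Article {                    // Model
    let title: String
    let published: Date
}

final class ArticleViewModel {      // ViewModel: turns the model into display-ready values
    private let article: Article

    init(article: Article) {
        self.article = article
    }

    var titleText: String { article.title.uppercased() }

    var dateText: String {
        DateFormatter.localizedString(from: article.published, dateStyle: .medium, timeStyle: .none)
    }
}

final class ArticleView {           // View (a view controller in a real app): only binds and renders
    private let viewModel: ArticleViewModel

    init(viewModel: ArticleViewModel) {
        self.viewModel = viewModel
    }

    func render() {
        print("\(viewModel.titleText) - \(viewModel.dateText)")
    }
}

let view = ArticleView(viewModel: ArticleViewModel(article: Article(title: "Hello MVVM", published: Date())))
view.render()
```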
Kronos, an iOS app to make runners love numbers
In 2017, I managed to run about 750 miles (1200 km); that's 250 miles more than the year before. I know it because Strava tracked it for me. I'm such a fan of their product that using it has become part of my routine and my training. However, during that journey, I always missed numbers that talked to me. That is how I created Kronos.
Starting your year the right way
Starting a new year is always exciting. Most of us have new resolutions and a bucket list we want to accomplish for 2018, but quite often, as soon as something goes wrong, the whole list goes wrong. Here is some advice to keep track of it.
Do you need a Today extension for your iOS app?
For the last couple of months, I observed the Today extensions of some of the iOS apps I use daily to see when those widgets are useful and how to justify developing one. Here are my conclusions.
Face detection in iOS with Core ML and Vision in Swift
With iOS11, Apple introduced the ability to integrate machine learning into mobile apps with Core ML. As promising as it sounds, it also has some limitations; let's discover them around a face detection sample app.
Making five years in three
I always thought a good way to stay motivated and look forward is to have goals you can accomplish in the short term, about 3 to 12 months maximum. It's at least the way I dealt with my life after graduating.
How to use Javascript with WKWebView in Swift
Embedding web content into native apps is a frequent approach to quickly add content to a mobile app. It can be for a contact form, but also for more complex content to bootstrap a missing native feature. But you can go further and build a two-way bridge between web and mobile using JavaScript and Swift.
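The core of such a bridge, sketched with an illustrative handler name (not the article's exact code), looks roughly like this:

```swift
import UIKit
import WebKit

final class WebBridgeViewController: UIViewController, WKScriptMessageHandler {
    private var webView: WKWebView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // 1. Register a handler the page can call via
        //    window.webkit.messageHandlers.native.postMessage(...)
        //    (note: this retains self; remove the handler when the controller goes away).
        let configuration = WKWebViewConfiguration()
        configuration.userContentController.add(self, name: "native")

        webView = WKWebView(frame: view.bounds, configuration: configuration)
        view.addSubview(webView)

        webView.loadHTMLString(
            #"<script>window.webkit.messageHandlers.native.postMessage({event: "loaded"});</script>"#,
            baseURL: nil
        )
    }

    // 2. JavaScript -> Swift
    func userContentController(_ userContentController: WKUserContentController,
                               didReceive message: WKScriptMessage) {
        print("Received from JS:", message.body)

        // 3. Swift -> JavaScript
        webView.evaluateJavaScript("document.title") { result, _ in
            print("Page title:", result ?? "none")
        }
    }
}
```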
Using Charles as SSL Proxy on iOS
Most apps use HTTPS requests to access data, and because of SSL encryption, it can be tough to debug them from iOS apps that are already on the App Store. Charles is the perfect tool to help you inspect your HTTPS requests.
Create your private CocoaPod library
Libraries and external dependencies have always been a good way to avoid developers recreating something that already exists. It's also a good way to help each other and leave something reusable behind. CocoaPods is the most used tool to manage dependencies around Xcode projects. Let's see how to create your own private pod.
How to be what you want to be
Starting 2017, I decided that this year would be mine. It doesn't mean everything would be given to me, but I would stay open to new opportunities and stay an actor in my own life, be what I want to be. Halfway through, it's time for reflection.
Build your Android app with Bitbucket Pipeline and HockeyApp
Configuring continuous integration can be tricky for mobile apps. Let's see how quick it is to build an Android app with Bitbucket Pipelines and deliver it with App Center (formerly HockeyApp).
How to migrate from WordPress to a static website with Hugo and AWS
Recently, I got a reminder that my domain name and shared host would eventually expire this summer. I had always had a WordPress site for my website and thought it was time to move on to something easier to maintain. Here is how I managed to migrate my WordPress blog to a static website with Hugo on AWS.
10 weeks training with running mobile apps
This year, I finally signed up for a marathon, and the way I use running apps and their services has clearly changed. Giving the best user experience around those services is essential to make the app useful. Here is my feedback as a mobile developer during my last 10 weeks of training.
French Election 2017, don't get fooled by surveys
Technology has never been as important as it is today in politics. Everything is related to digital data. If we only analyze the news around the US elections in 2016, it was mostly about email hacks, fake news in daily news feeds, or online surveys. Concerned about the French elections in 2017, I wanted to be a bit more active and do something related to the last one: online surveys.
Six months of Android development
In my current role at Qudini, I started as an iOS developer. My main task was to create and improve our mobile products for iOS devices based on what was already done on Android. However, I wanted to be more efficient in my job and I thought it could be by impacting more users through Android development. Once our iOS apps were at the same level as the Android ones, I pushed the idea that it would be better if I started doing Android too. Here is my feedback after 6 months of developing on Android.
Feature flag your mobile app with Apptimize
Recently, I got the chance to integrate feature flags into a mobile app I work on. The idea of feature flags is simple: they let you enable and manage features in your mobile app remotely without requiring a new release. Let's see the benefits and how to integrate a feature flag solution like Apptimize's.
Xcode script automation for SauceLabs
Couple months ago, I’ve tried to set a mobile testing environment with Appium and one of the best tools to execute these tests was SauceLabs, a cloud platform dedicated for testing. SauceLabs is pretty easy to use but here is couple tricks to make even easier.
Mobile continuous delivery with bitrise
Continuous integration and continuous delivery is something I wanted to do a while ago, specially since Apple accelerated its approval process to publish new apps on its mobile store. It can now takes less than a day to have an update available for your mobile users: continuous integration and continuous delivery makes more sense than ever on mobile apps.
How can a developer do marketing?
Working as a mobile developer, I created multiple apps during last couple years for companies I worked for, and eventually for personal projects. At the beginning, I though the goal for any developer was the release itself: shipping code and moving on, but I quickly found out that it was more frustrating than everything to stop here. That’s how I started thinking about what should be the next step and if a developer can actually do marketing and how.
Growth Hacking applied to your LinkedIn profile to get a new job
I recently finished Growth Hacking Marketing by Ryan Holiday and learn a lot of things about it. Some of them remembered me the way I found my job in London and how I tweaked my LinkedIn profile to fit the targeted audience.
How to create an iOS app for Sens'it tracker in Swift
Sens’it is small tracker developed by Sigfox and given for free during events to let people test the Sigfox low frequency IoT network. Let’s see how to create an iOS app in Swift based on Sens’it api.
How to keep your privacy in mobile apps
Couple years ago, I worked on a mobile app linked to video and audio recording. I quickly see that, once the user agreed for permissions, it can be easy to track personal data without user noticed it. Let see how limit mobile app permissions to maintain user privacy.
Appium, when automation testing can be randomly wrong
Appium is an UI automation testing framework, helping developers to automatically test their app. This tool can be really powerful but my experience with it let me think it’s not enough accurate to be used everyday and at its full potential.
UI Automation testing on iOS9
During WWDC2015, Apple announced big stuff, but they also released awesome features for developers. One of them was dedicated to UI Testing. Working around UI Automation test, I’ve just discovered last Xcode 7 and how life is going to be easier with their last feature for that.
How to work with native iOS and javascript callbacks in Objective-C
Recently I worked on a small iOS mobile project around Javascript. I wanted to load web content from iOS with Javascript inside and get callbacks from Javascript into iOS, to save native data and transmit it to an other controller if needed. The second part was also to call Javascript methods from iOS part.
AmbiMac, an app creating your own ambilight
Philips created few years ago Ambilight, a TV with a dynamic lights on it back. With two friends, we wanted to design an app with a similar function based on connected light bulb during an hackathon. Here is what we have done in 24h hours of code, let’s meet AmbiMac.
Introduction to sleep analysis with HealthKit with Swift
HealthKit is a powerful tool if you want to create an iOS mobile app based on health data. However, it’s not only for body measurements, fitness or nutrition; it’s also sleep analysis. In this HealthKit tutorial, I will show you how to read and write some sleep data and save them in Health app.
UPDATE - April 2020: Originally written for Swift 1.0, then 2.0, I’ve updated this post for latest Swift 5.1 version and Xcode 11.3.
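For a sense of the API involved, saving a sleep sample looks roughly like the sketch below (it assumes the HealthKit capability and usage descriptions are configured; names and flow are illustrative, not the tutorial's exact code):

```swift
import HealthKit

let healthStore = HKHealthStore()

func saveSleep(from start: Date, to end: Date) {
    guard HKHealthStore.isHealthDataAvailable(),
          let sleepType = HKObjectType.categoryType(forIdentifier: .sleepAnalysis) else { return }

    // The completion's Bool means the request finished; permission is effectively
    // checked when the sample is saved.
    healthStore.requestAuthorization(toShare: [sleepType], read: [sleepType]) { success, _ in
        guard success else { return }

        let sample = HKCategorySample(type: sleepType,
                                      value: HKCategoryValueSleepAnalysis.asleep.rawValue,
                                      start: start,
                                      end: end)
        healthStore.save(sample) { saved, error in
            print(saved ? "Sleep sample saved" : "Save failed: \(String(describing: error))")
        }
    }
}

// Example: record the last eight hours as sleep.
saveSleep(from: Date().addingTimeInterval(-8 * 3600), to: Date())
```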
Dynamic url rewriting in CodeIgniter
I work with CodeIgniter almost exclusively on APIs, but sometimes it can help on short-lived websites. URL rewriting is a good thing to know if you want to optimize SEO for the key pages of a website. That's what I want to show you, and how easy it is to set up.
The developer's job in connected devices
At the end of my studies, I chose to write my thesis on connected devices, and more precisely on the development of digital services around these devices. This in-depth work allowed me to take a step back from my own work, but it was also an opportunity to find a definition of what a connected-device developer is.
Majordhome, the project born during a startup weekend
Last October, I worked on the CocktailMaker, a connected device that makes creating cocktails easier. Wanting to push the concept a bit further, I signed up for the November startup weekend organised at EM Lyon to discover the marketing and business aspects I am missing today. A look back at these 54 hours of hard work.
The difficulties around connected devices
These days, there is a lot of noise around connected devices. Every day, we discover new articles about connected devices announced on the market or funded on crowdfunding platforms. We have far less information about all the difficulties tied to these innovative projects. Here are my conclusions from the research I did on the subject.
CocktailMaker, the 100% hackathon connected device
Last year at this same time, I took part in the Fhacktory, the new-generation hackathon born in Lyon, with a mobile app dedicated to skydiving. This year, I made it onto the podium of this event again by developing a connected device, the CocktailMaker. A look back at this 100% hack weekend.
How Jawbone is adapting to the Internet of Things
In the connected-device market, Jawbone quickly became a pillar of the quantified self with its UP and UP24 wristbands. Here is a breakdown of their latest developments to stay at the cutting edge of wearables.
Moto 360 or Withings Activité
More and more smartwatches are appearing, but in my opinion, most of them miss the essential point: the watch remains one of the only masculine accessories, so it has to be made elegant while respecting its historical shape. That's why in this article I focus mainly on "dress" watches and, while waiting for Apple's release, I offer a comparison between Motorola's smartwatch and the freshly announced one from Withings.
My first steps towards the Lean Startup
Not wanting to limit myself to my technical background, I am increasingly trying to develop entrepreneurial notions, with the idea of being more useful in my technical analysis and of continuing to think about how different applications are developed in a start-up. The idea is not to limit yourself to the development that is requested, but to try to grasp the whole chain of thought, from customer needs to the use of a newly developed service/product, and to see how it is used and what needs improving.
To do this, and with the wise advice of a friend, Maxime Salomon, I started reading The Lean Startup by Eric Ries. This book covers many topics around entrepreneurship and marketing, as well as product development proper. The idea is to propose an iterative development cycle that makes it possible to quickly measure different parameters and evolve a product based on new data.
Coming from a more scientific background, I need to put into practice what is being discussed to better understand the proposed solution; I also need to read up on the different terms used so as not to miss the point. That's why I'm taking my time reading this book, but here is my feedback on what I've learned so far and how I'm trying to put it into practice.
UP24 - Discovering Jawbone's connected wristband
Every day we discover more and more connected devices; they fall into several categories such as health, music, lighting, etc. A good portion are also about activity tracking, like the Jawbone UP wristband. Being interested in the performance of these so-called "wearable" connected devices, here is my feedback on the UP24 wristband and the services offered around it.
Introduction to SoundCloud
SoundCloud is one of the biggest independent music platforms, with more than 200 million users for this social network based on music sharing. Some artists only publish their music on this platform. It is also the place for newcomers who want to try out their tracks and get known. You can also find speeches, podcasts and all other kinds of audio content there.
With this goal of always having good music, SoundCloud is available on every platform (web and mobile) and listening is free. For an even more varied use of their service, SoundCloud offers an API as well as many SDKs (JavaScript, Ruby, Python, PHP, Cocoa and Java). Let's see together how to integrate SoundCloud into an iPhone mobile app.
How to succeed at your first interview
Interviewing for a position is always a bit stressful. Depending on how that stress is handled, the person can come across as someone who is not sure of themselves, through their gestures (trembling, stammering, rubbing their hands) or their words (not finishing sentences, overly long and complex phrases, etc.). In those cases it is hard to give the best image of yourself and show that you are hard-working, motivated and ready for the job.
Based on my experience, here are a few simple tips.
Spotify and its integration tools
After working with Deezer's technologies, let's see what tools Spotify offers for web or mobile integration. Since Spotify offers free listening on its desktop client and recently on mobile (sprinkled with ads), it stands out from Deezer, which requires a Premium account for use on a smartphone. The integration for developers is also different, but to what extent? That's what we are going to see.
Hackathon: my connected home
Connected devices are more and more present in our homes. They include products such as light bulbs, audio speakers and smart plugs. They also include more innovative products such as the Withings scale, the Sphero ball, the "holî" connected lamp, or Parrot's plant sensor.
It is with this in mind that the company Direct Energie organised a hackathon around connected devices to present different solutions around energy management and smart objects.
I took part as technical support for the "holî" product and its SDK, helping developers get familiar with the tool. Having already done a hackathon on the developer side, this is a new experience report, this time from the partner side.
SpriteKit, an iOS7 framework for video games
Nowadays, video games are more and more present. With the smartphone world, it is easier and easier to take video games with us everywhere.
Several games have been so successful that it remains hard to ignore this use of our phones as consoles. To name just a few: Doodle Jump, Angry Birds, or the famous Candy Crush.
Since the release of iOS7, Apple has added a 2D video game framework directly in its SDK: SpriteKit. Let's see together how to use it.
Fhacktory, a new-generation hackathon
A hackathon is the equivalent of a marathon in the field of software development. Well known through the "Startup Weekend" format, the principle has been adapted in software to developing a project in a given amount of time. The goal is to put together, over a weekend, a team that will rally around an idea and propose a solution to a problem. I recently took part in one of them, the Fhacktory: a hackathon that defines itself as "100% hack, 0% bullshit", and here is my experience report.
Discovering Deezer's tools
Since Deezer is one of the biggest platforms for listening to and sharing music, it is interesting to see how to use the different tools it makes available to us, namely its track search API and its different SDKs for web or mobile integration.
Let's see together how to use them, for what purposes, and what their limits are. For the SDK, I will only look at the iOS one.
iJump, an iPhone app for skydivers
When launching the Weather web portal, my idea was to use it as a foundation for a mobile version. Indeed, the point of weather data is to stay mobile and follow its user. By integrating different notions associated with skydiving, and with the help of the Fédération Française de Parachutisme, here is iJump: the mobile app for skydivers.
Mobile development training
Six months ago now, I started a training programme to become an instructor in Cocoa and Objective-C.
This training included several stages, each ending with an exam in order to move on to the next one:
- A teaching part, during which we are assessed on our ability to communicate a message, to explain a technology, to manage our speaking time, and to run a class.
- A technical part, where the assessment focused exclusively on knowledge of the technologies I had put myself forward for. For my part, it allowed me to review the foundations of Cocoa as well as the history of NeXT.
Here are my various takeaways from my first experience as a trainer.
Sencha Touch: an HTML5 framework for mobile apps
Introduction:
Sencha is an HTML5 framework for creating cross-platform mobile applications. Its benefit is to produce, from an HTML project and JSON code, the same mobile application on several platforms, an incredible time saver if the code sticks to it. Let's look at the first steps of an application built with Sencha.
MVVM Light with the Windows Phone 8 SDK
The new Windows 8 operating system goes hand in hand with the update of its mobile system: Windows Phone 8.
Here is a short introduction to the MVVM Light Toolkit, a set of components based on a Model-View-ViewModel structure on top of the XAML/C# frameworks, which can be used for Windows Phone 8 development.
Project: Weather, a weather portal for skydiving
Context:
Having recently been introduced to skydiving, I learned that this discipline is largely dependent on the weather.
Unfortunately, finding "real-time" weather for your drop zone is not easy. Even 10 km away from your drop zone, the difference in weather can be significant for skydiving.
That's why I decided to develop a web portal to consult the latest weather report, less than 12 hours old, for any drop zone in France.
Integrating DataMapper into CodeIgniter
Introduction:
An ORM (object-relational mapping) is used in object-oriented programming to virtually create a model based on a database. It avoids having to write database queries yourself, a real time saver.
Project: iDevWeb - Update
The RestKit library and data synchronisation
Introduction
Synchronising data online is a common practice to have content updated on every use (information apps, news apps and others).
Finding a simple way to embed this data before an online synchronisation is interesting, as it allows the application to be used even if the data is not up to date.
Working in Objective-C on mobile applications for iPhone/iPad, let's see how to use RestKit for this purpose.
Quel-camping.fr
After finishing my first year of computer science studies, I had the idea of building a website as a first professional experience working for myself.
Ideas under consideration:
After a few ideas and some wise advice from a young entrepreneur, I decided to choose the tourism sector, and more precisely the field of open-air accommodation (campsites).
Indeed, this field was little exploited on the internet even though the number of campsite holiday bookings kept increasing.
Project: iDevWeb - Managing web projects
As a web developer, you sometimes work on several projects at the same time and keep old projects without deleting them.
When using MAMP on Mac OS X, you have to go to the exact URL of the folder to reach the website; by default there is no page that indexes the folders contained in the development directory.
That's when I had the idea of developing a small PHP portal that would list the folders contained in my development directory, which would avoid having to remember the project name and the exact path to reach it.
URL rewriting with htaccess in CodeIgniter
The principle of URL rewriting lets you "transform" URLs to more simply reference the key pages of a website. For that we use the htaccess file, a hidden file located at the root of the application folder.
We'll see how URLs are handled by default in the CodeIgniter framework and how to modify them to avoid losing the search ranking a website has already earned.
CodeIgniter and its MVC model
CodeIgniter is an open source PHP framework based on an MVC architecture.
Reminder:
The MVC architecture (Model - View - Controller) makes it simpler to organise an application.
- Model: data types, objects
- View: the interface with the user
- Controller: data processing, event handling.
A framework is a kit that lets you create the foundation of an application faster and with a more solid structure.
Overview:
CodeIgniter has the advantage of being free, but above all of being lighter compared to the other well-known PHP frameworks. It has a more than complete "user guide" (online on the official site and locally in the downloaded folder) that offers many example applications. Setup is intuitive and no configuration is needed for simple use.

Getting Clarity on Apple’s Liquid Glass
- Notes
- apple
- design systems
- Graphic Design
- UI/UX Design
Gathered notes on Liquid Glass, Apple’s new design language that was introduced at WWDC 2025. These links are a choice selection of posts and resources that I've found helpful for understanding the context of Liquid Glass, as well as techniques for recreating it in code.
Folks have a lot to say about “liquid glass,” the design aesthetic that Apple introduced at WWDC 2025. Some love it, some hate it, and others jumped straight into seeing how they could create it in CSS. There’s a lot to love, hate, and experience with liquid glass. You can love the way content reflects against backgrounds. You can hate the poor contrast between foreground and background. And you can be eager to work with it. All of those can be true at the same time. Image credit: Apple. I, for one, am generally neutral with things like this for that exact reason. I’m intrigued by liquid glass, but hold some concern about legibility, particularly as someone who already struggles with the legibility of Apple’s existing design system (notably in Control Center). And I love looking at the many and clever ways that devs have tried to replicate liquid glass in their own experiments. So, I’m in the process of gathering notes on the topic as I wrap my head around this “new” (or not-so-new, depending on who’s talking) thing and figure out where it fits in my own work. These links are a choice selection of posts that I’ve found helpful and definitely not meant to be an exhaustive list of what’s out there.
WWDC Introduction
Always a good idea to start with information straight from the horse’s mouth. In short:
- It’s the first design system that is universally applied to all of Apple’s platforms, as opposed to a single platform like Apple’s last major overhaul, iOS 7.
- It’s designed to refract light and dynamically react to user interactions. By “dynamic” we’re referring to UI elements updating into others as the context changes, such as displaying additional controls. This sounds a lot like the Dynamic Island, supporting shape-shifting animations.
- There’s a focus on freeing up space by removing hard rectangular edges, allowing UI elements to become part of the content and respond to context.
Apple also released a more in-depth video aimed at introducing liquid glass to designers and developers. In short:
- Liquid glass is an evolution of the “aqua” blue interface from macOS 10, the real-time rendering introduced in iOS 7, the “fluidity” of iOS 10, the flexibility of the Dynamic Island, and the immersive interface of visionOS.
- It’s a “digital meta-material” that dynamically bends and shapes light while moving fluidly like water.
- It’s at least partially a response to hardware devices adopting deeper rounded corners.
- Lensing: background elements are bent and warped rather than scattering light, as in previous designs.
- There’s a gel-like feel to elements.
- Translucence helps reveal what is underneath a control, such as a progress indicator you can scrub more precisely by seeing what is behind the surface.
- Controls are persistent between views for establishing a relationship between controls and states. This reminds me of the View Transition API.
- Elements automatically adapt to light and dark modes.
- Liquid glass is composed of layers: highlight (light casting and movement), shadow (added depth for separation between foreground and background), and illumination (the flexible properties of the material).
- It is not meant to be used everywhere but is most effective for the navigation layer. And avoid using glass on glass.
- There are two variants: regular (most versatile) and clear (does not have adaptive behaviors, allowing content to be more visible below the surface).
- Glass can be tinted different colors.
Documentation Right on cue, Apple has already made a number of developer resources available for using and implementing liquid glass that are handy references. Introduction to Liquid Glass Adopting Liquid Glass Landmarks: Building an app with Liquid Glass Applying Liquid Glass to custom views ‘Beautiful’ and ‘Hard to Read’: Designers React to Apple’s Liquid Glass Update This Wired piece is a nice general overview of what liquid glass is and context about how it was introduced at WWDC 2025. I like getting a take on this from a general tech perspective as opposed to, say, someone’s quick hot take. It’s a helpful pulse on what’s happening from a high level without a bunch of hyperbole, setting the stage for digging deeper into things. In short: Apple is calling this “Liquid Glass.” It’s Apple’s first significant UI overhaul in 10 years. It will be implemented across all of Apple’s platforms, including iOS, macOS, iPadOS, and even the Vision Pro headset from which it was inspired. “From a technical perspective, it’s a very impressive effect. I applaud the time and effort it must have taken to mimic refraction and dispersion of light to such a high degree.” “Similar to the first beta for iOS 7, what we’ve seen so far is rough on the edges and potentially veers into distracting or challenging to read, especially for users with visual impairments.” Accessibility Let’s get right to the heart of where the pushback against liquid glass is coming from. While the aesthetic, purpose, and principles of liquid glass are broadly applauded, many are concerned about the legibility of content against a glass surface. Traditionally, we fill backgrounds with solid or opaque solid color to establish contrast between the foreground and background, but with refracted light, color plays less a role and it’s possible that highlighting or dimming a light source will not produce enough contrast, particularly for those with low-vision. WCAG 2.2 emphasizes color and font size for improving contrast and does provide guidance for something that’s amorphous like liquid glass where bending the content below it is what establishes contrast. “Apple’s “Liquid Glass” and What It Means for Accessibility”: “When you have translucent elements letting background colors bleed through, you’re creating variable contrast ratios that might work well over one background, but fail over a bright photo of the sunset.” “Apple turned the iPhone’s notch into the Dynamic Island, Android phones that don’t have notches started making fake notches, just so they could have a Dynamic Island too. That’s influence. But here they are making what looks like a purely aesthetic decision without addressing the accessibility implications.” “People with dyslexia, who already struggle with busy backgrounds and low-contrast text, now deal with an interface where visual noise is baked into the design language. People with attention disorders may have their focus messed up when they see multiple translucent layers creating a whole lot of visual noise.” “It’s like having a grand entrance and a side door marked ‘accessible.’ Technically compliant. But missing the point.” “The legal landscape adds another layer. There’s thousands of digital accessibility lawsuits filed in the U.S. yearly for violating the ADA, or the American Disabilities Act. Companies are paying millions in settlements. But this is Apple. They have millions. Plus all the resources in the world to save them from legal risks. 
But their influence means they’re setting precedents.” “Liquid Glass: Apple vs accessibility”: “Yet even in Apple’s press release, linked earlier, there are multiple screenshots where key interface components are, at best, very difficult to read. That is the new foundational point for Apple design. And those screenshots will have been designed to show the best of things.” “Apple is still very often reactive rather than proactive regarding vision accessibility. Even today, there are major problems with the previous versions of its operating systems (one example being the vestibular trigger if you tap-hold the Focus button in Control Centre). One year on, they aren’t fixed.” “State, correctly, that Apple is a leader in accessibility. But stop assuming that just because this new design might be OK for you and because Apple has controls in place that might help people avoid the worst effects of design changes, everything is just peachy. Because it isn’t.” “Liquid Glass” by Hardik Pandya “The effect is technically impressive, but it introduces a layer of visual processing between you and your memories. What was once immediate now feels mediated. What was once direct now feels filtered.” “While Apple’s rationale for Liquid Glass centers on ‘seeing’ content through a refractive surface, user interface controls are not meant to be seen—they are meant to be operated. When you tap a button, slide a slider, or toggle a switch, you are not observing these elements. You are manipulating them directly.” “Buttons become amorphous shapes. Sliders lose their mechanical clarity. Toggle switches abandon their physical affordances. They appear as abstract forms floating behind glass—beautiful perhaps, but disconnected from the fundamental purpose of interface controls: to invite and respond to direct manipulation.” “The most forward-thinking interface design today focuses on invisibility – making the interaction so seamless that the interface itself disappears. Liquid Glass makes the interface more visible, more present, and more demanding of attention.” “Liquid glass, now with frosted tips”: It’s easy to dump on liquid glass in its introductory form, but it’s worth remembering that it’s in beta and that Apple is actively developing it ahead of its formal release. A lot has changed between the Beta 2 and Beta 3 releases. The opacity between glass and content has been bumped up in several key areas. Tutorials, Generators, and Frameworks It’s fun to see the difference approaches many folks have used to re-create the liquid glass effect in these early days. It amazes me that there is already a deluge of tutorials, generators, and even UI frameworks when we’re only a month past the WWDC 2025 introduction. Create this trendy blurry glass effect with CSS (Kevin Powell) Liquid Glass design using CSS (Nordcraft) Adopting Apple’s Liquid Glass: Examples and best practices (LogRocket) Liquid Glass Figma File CSS Liquid Glass Effects (DesignFast) Liquid Glass UI Framework Liquid Glass CSS Generator Experiments Let’s drop in a few interesting demos that folks have created. To be clear, glass-based interfaces are not new and have been plenty explored, which you can find over at CodePen in abundance. These are recent experiments. The most common approaches appear to reach for SVG filters and background blurs, though there are many programmatic demos as well. 
- A CSS-only approach with an SVG filter and backdrop-filter, using a series of nested containers that roughly mimics how Apple describes glass as being composed of three layers (highlight, shadow and illumination).
- The same sort of deal, but in the context of a theme toggle switch that demonstrates how glass can be tinted.
- A comparison of a straight-up CSS blur with an SVG backdrop.
- A contextual example of a slider component.
- A version using WebGL.
Assorted links and coverage
A few more links from this browser tab group I have open:
- “Apple’s Liquid Glass is exactly as ambitious as Apple” (Fast Company)
- “Apple unveils iOS 26 with Liquid Glass” (9to5Mac)
- “Apple Announces All-New ‘Liquid Glass’ Software Redesign Across iOS 26 and More” (MacRumors)
- “Apple just added more frost to its Liquid Glass design” (The Verge)
- “Apple tones down Liquid Glass effect in iOS 26 beta 3” (The Apple Post)
- “More assorted notes on Liquid Glass” (Riccardo Mori)
- A bunch of CodePen Collections
What I Took From the State of Dev 2025 Survey
- Articles
- opinion
- survey
State of Devs 2025 survey results are out! Sunkanmi Fafowora highlights a few key results about diversity, health, and salaries.
State of Devs 2025 survey results are out! While the survey isn’t directly related to the code part of what we do for work, I do love the focus Devographics took ever since its inception in 2020. And this year it brought us some rather interesting results through the attendance of 8,717 developers, lots of data, and even more useful insights that I think everyone can look up and learn from. I decided to look at the survey results with an analytical mindset, but wound up pouring my heart out because, well, I am a developer, and the entire survey affects me in a way. I have some personal opinions, it turns out. So, sit back, relax, and indulge me for a bit as we look at a few choice pieces of the survey. And it’s worth noting that this is only part one of the survey results. A second data dump will be published later and I’m interested to poke at those numbers, too. An opportunity to connect One thing I noticed from the Demographics section is how much tech connects us all. The majority of responses come from the U.S. (26%) but many other countries, including Italy, Germany, France, Estonia, Austria, South Africa and many more, account for the remaining 74%. I mean, I am working and communicating with you right now, all the way from Nigeria! Isn’t that beautiful, to be able to communicate with people around the world through this wonderful place we call CSS-Tricks? And into the bigger community of developers that keeps it so fun? I think this is a testament to how much we want to connect. More so, the State of Devs survey gives us an opportunity to express our pain points on issues surrounding our experiences, workplace environments, quality of health, and even what hobbies we have as developers. And while I say developers, the survey makes it clear it’s more than that. Behind anyone’s face is someone encountering life challenges. We’re all people and people are capable of pure emotion. We are all just human. It’s also one of the reasons I decided to open a Bluesky account: to connect with more developers. I think this survey offers insights into how much we care about ourselves in tech, and how eager we are to solve issues rarely talked about. And the fact that it’s global in nature illustrates how much in common we all have. More women participated this year From what I noticed, fewer women participated in the 2024 State of JavaScript and State of CSS surveys (around 6%), while women represented a bigger share in this year’s State of Devs survey. I’d say 15% is still far too low to fairly “represent” an entire key segment of people, but it is certainly encouraging to see a greater slice in this particular survey. We need more women in this male-dominated industry. Experience over talent Contrary to popular opinion, personal performance does not usually equate to higher pay, and this is reflected in the results of this survey. It’s more like, the more experienced you are, the more you’re paid. But even that’s not the full story. If you’re new to the field, you still have to do some personal marketing, find and keep a mentor, and a whole bunch of stuff. Cassidy shares some nice insights on this in a video interview tracing her development career. You should check it out, especially if you’re just starting out. Notice that the average income for those with 10-14 years of experience ($115,833) is on par with those with between 15-29 years of experience ($118,000) and not far from those with 30+ years ($120,401).
Experience appears to influence income, but perhaps not to the extent you would think, or else we’d see a wider gap between those with 15 years versus those with more than double the service time. More than that, notice how income for the most experienced developers (30+ years) is larger on average but the range of how much they make is lower than for those with 10-29 years under their belts. I’m curious what causes that decline. Is it a lack of keeping up with what’s new? Is it ageism? I’m sure there are lots of explanations. Salary, workplace, and job hunting I prefer not to drill into each and every report. I’m interested in very specific areas that are covered in the survey. And what I take away from the survey is bound to be different than your takeaways, despite numbers being what they are. So, here are a few highlights of what stood out to me personally as I combed through the results. Your experience, employment status, and company’s employee count seem to directly affect pay. For example, full-timers report higher salaries than freelancers. I suppose that makes sense, but I doubt it provides the full picture because freelancers freelance for a number of reasons, whether it’s flexible hours, having more freedom to choose their projects, or having personal constraints that limit how much they can work. In some ways, freelancers are able to command higher pay while working less. Bad management and burnout seem to be the most talked-about issues in the workplace. Be on guard during interviews, look up reviews about the company you’re about to work for, and make sure there are far fewer complaints than accolades. Make sure you’re not getting too worked up during work hours; breaks are essential for a boost in productivity. Seventy percent of folks reported no discrimination in the workplace, which means we’re perhaps doing something right. That said, it’s still disheartening that 30% experience some form of discrimination and lowering that figure is something we ought to aim for. I’m hoping companies — particularly the tech giants in our space — take note of this and enforce laws and policies surrounding this. Still, we can always call out discriminatory behavior and make corrections where necessary. And who’s to say that everyone who answered the survey felt safe sharing that sort of thing? Silence can be the enemy of progress. Never get too comfortable in your job. Although 69% report having never been laid off, I still think that job security is brittle in this space. Always learn, build, and if possible, try to look for other sources of income. Layoffs are still happening, and looking at the news, that’s likely to continue for the foreseeable future, with the U.S., Australia, and U.K. leading the way. One number that jumped off the page for me is that it takes an average of four applications for most developers to find a new job. This bamboozles me. I’m looking for a full-time role (yes, I’m available!), and I regularly apply for more than four jobs in a given day. Perhaps I’m doing something wrong, but that’s also not consistent with those in my social and professional circles. I know and see plenty of people who are working hard to find work, and the number of jobs they apply for has to bring that number up. Four applications seems way low, though I don’t have the quantitative proof for it. Your personal network is still the best way to find a job.
We will always and forever be social animals, and I think that’s why most survey participants say that coworker relationships are the greatest perk of a job. I find this to be true with my work here at CSS-Tricks. I get to collaborate with other like-minded CSS and front-end enthusiasts far and wide. I’ve developed close relationships with the editors and other writers, and that’s something I value more than any other benefits I could get somewhere else. Compensation is still a top workplace challenge. JavaScript is still the king of programming (bias alert), taking the top spot as the most popular programming language. In case you’re interested, CSS came in third. To my surprise, Bluesky is more popular amongst developers than X. I didn’t realize how much toxicity I’ve been exposed to at X until I opened a Bluesky account. I hate saying that the “engagement” is better, or some buzz-worthy thing like that, but I do experience more actual discussions over at Bluesky than I have for a long time at X. And many of you report the same. I hesitate to say that Bluesky is a direct replacement for what X (let’s face it, Twitter) used to be, but it seems we at least have a better alternative. Health issues Without our health, we are nothing. Embrace your body for what it is: your temple. It’s a symbiotic relationship. — Mrs. N. I’m looking closer at the survey’s results on health because of the sheer number of responses that report health issues. I struggle with issues like back pain, and that forced me to upgrade my work environment with a proper desk and chair. I tend to code on my bed, and well, it worked. But perhaps it wasn’t the best thing for my physical health. I know we can fall into the stereotype of people who spend 8-12 hours staring at two big monitors, sitting in a plush gaming chair, while frantically typing away at a mechanical keyboard. You know, the Hackers stereotype. I know that isn’t an accurate portrayal of who we are, but it’s easy to become that because of how people look at and understand our work. And if you feel a great deal of pressure to keep up with that image, I think it’s worth getting into a more healthy mindset, one that gets more than a few hours of sleep, prioritizes exercise, maintains a balanced diet, and all those things we know are ultimately good for us. Even though 20% of folks say they have no health issues at all, a whopping 80% struggle with health issues ranging from sleep deprivation to keeping a healthy weight. You are important and deserve to feel healthy. Think about your health the way you think about the UI/UX of the websites you design and build. It makes up a part of the design, but has the crucial role of turning ordinary tasks into enjoyable experiences, which, in turn, transforms into an overall beautiful experience for the user. Your health is the same. Those small parts often overlooked can and will affect the great machine that is your body. Here’s a small list of life improvements you can make right now. Closing thoughts Diversity, representation, experience, income, and health. That’s what stood out to me in the 2025 State of Devs survey results. I see positive trends in the numbers, but also a huge amount of opportunity to be better, particularly when it comes to being more inclusive of women, providing ample chances for upward mobility based on experience, and how we treat ourselves. Please check out the results and see what stands out to you. What do you notice?
Is there anything you are able to take away from the survey that you can use in your own work or life? I’d love to know! What I Took From the State of Dev 2025 Survey originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Setting Line Length in CSS (and Fitting Text to a Container)
- Articles
- typography
The many ways to juggle line length when working with text... including two proposed properties that could make it easier in the future.
First, what is line length? Line length is the length of a container that holds a body of multi-line text. “Multi-line” is the key part here, because text becomes less readable if the beginning of a line of text is too far away from the end of the prior line of text. This causes users to reread lines by mistake, and generally get lost while reading. Luckily, the Web Content Accessibility Guidelines (WCAG) gives us a pretty hard rule to follow: no more than 80 characters on a line (40 if the language is Chinese, Japanese, or Korean), which is super easy to implement using character (ch) units: width: 80ch; The width of 1ch is equal to the width of the number 0 in your chosen font, so the exact width depends on the font. Setting the optimal line length Just because you’re allowed up to 80 characters on a line, it doesn’t mean that you have to aim for that number. A study by the Baymard Institute revealed that a line length of 50-75 characters is the optimal length — this takes into consideration that smaller line lengths mean more lines and, therefore, more opportunities for users to make reading mistakes. That being said, we also have responsive design to think about, so setting a minimum width (e.g., min-width: 50ch) isn’t a good idea because you’re unlikely to fit 50 characters on a line with, for example, a screen/window size that is 320 pixels wide. So, there’s a bit of nuance involved, and the best way to handle that is by combining the clamp() and min() functions: clamp(): Set a fluid value that’s relative to a container using percentage, viewport, or container query units, but with minimum and maximum constraints. min(): Set the smallest value from a list of comma-separated values. Let’s start with min(). One of the arguments is 93.75vw. Assuming that the container extends across the whole viewport, this’d equal 300px when the viewport width is 320px (allowing for 20px of spacing to be distributed as you see fit) and 1350px when the viewport width is 1440px. However, as long as the other argument (50ch) is the smaller of the two values, that’s the value that min() will resolve to. min(93.75vw, 50ch); Next is clamp(), which accepts three arguments in the following order: the minimum, preferred, and maximum values. This is how we’ll set the line length. For the minimum, you’d plug in your min() function, which sets the 50ch line length but only conditionally. For the maximum, I suggest 75ch, as mentioned before. The preferred value is totally up to you — this will be the width of your container when not hitting the minimum or maximum. width: clamp(min(93.75vw, 50ch), 70vw, 75ch); In addition, you can use min(), max(), and calc() in any of those arguments to add further nuance. If the container feels too narrow, then the font-size might be too large. If it feels too wide, then the font-size might be too small. Fit text to container (with JavaScript) You know that design trend where text is made to fit the width of a container? Typically, to utilize as much of the available space as possible? You’ll often see it applied to headings on marketing pages and blog posts. Well, Chris wrote about it back in 2018, rounding up several ways to achieve the effect with JavaScript or jQuery, unfortunately with limitations. However, the ending reveals that you can just use SVG as long as you know the viewBox values, and I actually have a trick for getting them. Although it still requires 3-5 lines of JavaScript, it’s the shortest method I’ve found.
It also slides into HTML and CSS perfectly, particularly since the SVG inherits many CSS properties (including the color, thanks to fill: currentColor): <h1 class="container"> <svg> <text>Fit text to container</text> </svg> </h1> h1.container { /* Container size */ width: 100%; /* Type styles (<text> will inherit most of them) */ font: 900 1em system-ui; color: hsl(43 74% 3%); text { /* We have to use fill: instead of color: here But we can use currentColor to inherit the color */ fill: currentColor; } } /* Select all SVGs */ const svg = document.querySelectorAll("svg"); /* Loop all SVGs */ svg.forEach(element => { /* Get bounding box of <text> element */ const bbox = element.querySelector("text").getBBox(); /* Apply bounding box values to SVG element as viewBox */ element.setAttribute("viewBox", [bbox.x, bbox.y, bbox.width, bbox.height].join(" ")); }); Fit text to container (pure CSS) If you’re hell-bent on a pure-CSS method, you are in luck. However, despite the insane things that we can do with CSS these days, Roman Komarov’s fit-to-width hack is a bit complicated (albeit rather impressive). Here’s the gist of it: The text is duplicated a couple of times (although hidden accessibly with aria-hidden and hidden literally with visibility: hidden) so that we can do math with the hidden ones, and then apply the result to the visible one. Using container queries/container query units, the math involves dividing the inline size of the text by the inline size of the container to get a scaling factor, which we then use on the visible text’s font-size to make it grow or shrink. To make the scaling factor unitless, we use the tan(atan2()) type-casting trick. Certain custom properties must be registered using the @property at-rule (otherwise they don’t work as intended). The final font-size value utilizes clamp() to set minimum and maximum font sizes, but these are optional. <span class="text-fit"> <span> <span class="text-fit"> <span><span>fit-to-width text</span></span> <span aria-hidden="true">fit-to-width text</span> </span> </span> <span aria-hidden="true">fit-to-width text</span> </span> .text-fit { display: flex; container-type: inline-size; --captured-length: initial; --support-sentinel: var(--captured-length, 9999px); & > [aria-hidden] { visibility: hidden; } & > :not([aria-hidden]) { flex-grow: 1; container-type: inline-size; --captured-length: 100cqi; --available-space: var(--captured-length); & > * { --support-sentinel: inherit; --captured-length: 100cqi; --ratio: tan( atan2( var(--available-space), var(--available-space) - var(--captured-length) ) ); --font-size: clamp( 1em, 1em * var(--ratio), var(--max-font-size, infinity * 1px) - var(--support-sentinel) ); inline-size: var(--available-space); &:not(.text-fit) { display: block; font-size: var(--font-size); @container (inline-size > 0) { white-space: nowrap; } } /* Necessary for variable fonts that use optical sizing */ &.text-fit { --captured-length2: var(--font-size); font-variation-settings: "opsz" tan(atan2(var(--captured-length2), 1px)); } } } } @property --captured-length { syntax: "<length>"; initial-value: 0px; inherits: true; } @property --captured-length2 { syntax: "<length>"; initial-value: 0px; inherits: true; } Watch for new text-grow/text-shrink properties To make fitting text to a container possible in just one line of CSS, a number of solutions have been discussed. The favored solution seems to be two new text-grow and text-shrink properties. Personally, I don’t think we need two different properties. 
In fact, I prefer the simpler alternative, font-size: fit-width, but since text-grow and text-shrink are already on the table (Chrome intends to prototype and you can track it), let’s take a look at how they could work. The first thing that you need to know is that, as proposed, the text-grow and text-shrink properties can apply to multiple lines of wrapped text within a container, and that’s huge because we can’t do that with my JavaScript technique or Roman’s CSS technique (where each line needs to have its own container). Both have the same syntax, and you’ll need to use both if you want to allow both growing and shrinking: text-grow: <fit-target> <fit-method>? <length>?; text-shrink: <fit-target> <fit-method>? <length>?; <fit-target> per-line: For text-grow, lines of text shorter than the container will grow to fit it. For text-shrink, lines of text longer than the container will shrink to fit it. consistent: For text-grow, the shortest line will grow to fit the container while all other lines grow by the same scaling factor. For text-shrink, the longest line will shrink to fit the container while all other lines shrink by the same scaling factor. <fit-method> (optional) scale: Scale the glyphs instead of changing the font-size. scale-inline: Scale the glyphs instead of changing the font-size, but only horizontally. font-size: Grow or shrink the font size accordingly. (I don’t know what the default value would be, but I imagine this would be it.) letter-spacing: The letter spacing will grow/shrink instead of the font-size. <length> (optional): The maximum font size for text-grow or minimum font size for text-shrink. Again, I think I prefer the font-size: fit-width approach as this would grow and shrink all lines to fit the container in just one line of CSS. The above proposal does way more than I want it to, and there are already a number of roadblocks to overcome (many of which are accessibility-related). That’s just me, though, and I’d be curious to know your thoughts in the comments. Conclusion It’s easier to set line length with CSS now than it was a few years ago. Now we have character units, clamp() and min() (and max() and calc() if you wanted to throw those in too), and wacky things that we can do with SVGs and CSS to fit text to a container. It does look like text-grow and text-shrink (or an equivalent solution) are what we truly need though, at least in some scenarios. Until we get there, this is a good time to weigh-in, which you can do by adding your feedback, tests, and use-cases to the GitHub issue. Setting Line Length in CSS (and Fitting Text to a Container) originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Scroll-Driven Sticky Heading
- Articles
- position
- Scroll Driven Animation
I was playing around with scroll-driven animations, just searching for all sorts of random things you could do. That’s when I came up with the idea to animate main headings and, using scroll-driven animations, change the headings based on the user’s scroll position.
Scroll-driven animations are great! They’re a powerful tool that lets developers tie the movement and transformation of elements directly to the user’s scroll position. This technique opens up new ways to create interactive experiences, cuing images to appear, text to glide across the stage, and backgrounds to subtly shift. Used thoughtfully, scroll-driven animations (SDA) can make your website feel more dynamic, engaging, and responsive. A few weeks back, I was playing around with scroll-driven animations, just searching for all sorts of random things you could do with it. That’s when I came up with the idea to animate the text of the main heading (h1) and, using SDA, change the heading itself based on the user’s scroll position on the page. In this article, we’re going to break down that idea and rebuild it step by step. This is the general direction we’ll be heading in, which looks better in full screen and viewed in a Chromium browser: It’s important to note that the effect in this example only works in browsers that support scroll-driven animations. Where SDA isn’t supported, there’s a proper fallback to static headings. From an accessibility perspective, if the browser has reduced motion enabled or if the page is being accessed with assistive technology, the effect is disabled and the user gets all the content in a fully semantic and accessible way. Just a quick note: this approach does rely on a few “magic numbers” for the keyframes, which we’ll talk about later on. While they’re surprisingly responsive, this method is really best suited for static content, and it’s not ideal for highly dynamic websites. Closer Look at the Animation Before we dive into scroll-driven animations, let’s take a minute to look at the text animation itself, and how it actually works. This is based on an idea I had a few years back when I wanted to create a typewriter effect. At the time, most of the methods I found involved animating the element’s width, required using a monospace font, or a solid color background. None of which really worked for me. So I looked for a way to animate the content itself, and the solution was, as it often is, in pseudo-elements. Pseudo-elements have a content property, and you can (kind of) animate that text. It’s not exactly animation, but you can change the content dynamically. The cool part is that the only thing that changes is the text itself, no other tricks required. Start With a Solid Foundation Now that you know the trick behind the text animation, let’s see how to combine it with a scroll-driven animation, and make sure we have a solid, accessible fallback as well. We’ll start with some basic semantic markup. I’ll wrap everything in a main element, with individual sections inside. Each section gets its own heading and content, like text and images. For this example, I’ve set up four sections, each with a bit of text and some images, all about Primary Colors. <main> <section> <h1>Primary Colors</h1> <p>The three primary colors (red, blue, and yellow) form the basis of all other colors on the color wheel. Mixing them in different combinations produces a wide array of hues.</p> <img src="./colors.jpg" alt="...image description"> </section> <section> <h2>Red Power</h2> <p>Red is a bold and vibrant color, symbolizing energy, passion, and warmth. 
It easily attracts attention and is often linked with strong emotions.</p> <img src="./red.jpg" alt="...image description"> </section> <section> <h2>Blue Calm</h2> <p>Blue is a calm and cool color, representing tranquility, stability, and trust. It evokes images of the sky and sea, creating a peaceful mood.</p> <img src="./blue.jpg" alt="...image description"> </section> <section> <h2>Yellow Joy</h2> <p>Yellow is a bright and cheerful color, standing for light, optimism, and creativity. It is highly visible and brings a sense of happiness and hope.</p> <img src="./yellow.jpg" alt="...image description"> </section> </main> As for the styling, I’m not doing anything special at this stage, just the basics. I changed the font and adjusted the text and heading sizes, set up the display for the main and the sections, and fixed the image sizes with object-fit. So, at this point, we have a simple site with static, semantic, and accessible content, which is great. Now the goal is to make sure it stays that way as we start adding our effect. The Second First Heading We’ll start by adding another h1 element at the top of the main. This new element will serve as the placeholder for our animated text, updating according to the user’s scroll position. And yes, I know there’s already an h1 in the first section; that’s fine and we’ll address it in a moment so that only one is accessible at a time. <h1 class="scrollDrivenHeading" aria-hidden="true">Primary Colors</h1> Notice that I’ve added aria-hidden="true" to this heading, so it won’t be picked up by screen readers. Now I can add a class specifically for screen readers, .srOnly, to all the other headings. This way, anyone viewing the content “normally” will see only the animated heading, while assistive technology users will get the regular, static semantic headings. Note: The style for the .srOnly class is based on “Inclusively Hidden” by Scott O’Hara. Handling Support As much as accessibility matters, there’s another concern we need to keep in mind: support. CSS Scroll-Driven Animations are fantastic, but they’re still not fully supported everywhere. That’s why it’s important to provide the static version for browsers that don’t support SDA. The first step is to hide the animated heading we just added using display: none. Then, we’ll add a new @supports block to check for SDA support. Inside that block, where SDA is supported, we can change back the display for the heading. The .srOnly class should also move into the @supports block, since we only want it to apply when the effect is active, not when it’s not supported. This way, just like with assistive technology, anyone visiting the page in a browser without SDA support will still get the static content. .scrollDrivenHeading { display: none; } @supports (animation-timeline: scroll()) { .scrollDrivenHeading { display: block; } /* Screen Readers Only */ .srOnly { clip: rect(0 0 0 0); clip-path: inset(50%); height: 1px; overflow: hidden; position: absolute; white-space: nowrap; width: 1px; } } Get Sticky The next thing we need to do is handle the stickiness of the heading. To make sure the heading always stays on screen, we’ll set its position to sticky with top: 0 so it sticks to the top of the viewport. While we’re at it, let’s add some basic styling, including a background so the text doesn’t blend with whatever’s behind the heading, a bit of padding for spacing, and white-space: nowrap to keep the heading on a single line. 
/* inside the @supports block */ .scrollDrivenHeading { display: block; position: sticky; top: 0; background-image: linear-gradient(0deg, transparent, black 1em); padding: 0.5em 0.25em; white-space: nowrap; } Now everything’s set up: in normal conditions, we’ll see a single sticky heading at the top of the page. And if someone uses assistive technology or a browser that doesn’t support SDA, they’ll still get the regular static content. Now we’re ready to start animating the text. Almost… The Magic Numbers To build the text animation, we need to know exactly where the text should change. With SDA, scrolling basically becomes our timeline, and we have to determine the exact points on that timeline to trigger the animation. To make this easier, and to help you pinpoint those positions, I’ve prepared the following script: @property --scroll-position { syntax: "<number>"; inherits: false; initial-value: 0; } body::after { counter-reset: sp var(--scroll-position); content: counter(sp) "%"; position: fixed; top: 0; left: 0; padding: 1em; background-color: maroon; animation: scrollPosition steps(100); animation-timeline: scroll(); } @keyframes scrollPosition { 0% { --scroll-position: 0; } 100% { --scroll-position: 100; } } I don’t want to get too deep into this code, but the idea is to take the same scroll timeline we’ll use next to animate the text, and use it to animate a custom property (--scroll-position) from 0 to 100 based on the scroll progress, and display that value in the content. If we add this at the start of our code, we’ll see a small red square in the top-left corner of the screen, showing the current scroll position as a percentage (to match the keyframes). This way, you can scroll to any section you want and easily mark the percentage where each heading should begin. With this method and a bit of trial and error, I found that I want the headings to change at 30%, 60%, and 90%. So, how do we actually do it? Let’s start animating. Animating Text First, we’ll clear out the content inside the .scrollDrivenHeading element so it’s empty and ready for dynamic content. In the CSS, I’ll add a pseudo-element to the heading, which we’ll use to animate the text. We’ll give it empty content, set up the animation-name, and of course, assign the animation-timeline to scroll(). And since I’m animating the content property, which is a discrete type, it doesn’t transition smoothly between values. It just jumps from one to the next. By setting the animation-timing-function property to step-end, I make sure each change happens exactly at the keyframe I define, so the text switches precisely where I want it to, instead of somewhere in between. .scrollDrivenHeading { /* style */ &::after { content: ''; animation-name: headingContent; animation-timing-function: step-end; animation-timeline: scroll(); } } As for the keyframes, this part is pretty straightforward (for now). We’ll set the first frame (0%) to the first heading, and assign the other headings to the percentages we found earlier. @keyframes headingContent { 0% { content: 'Primary Colors'} 30% { content: 'Red Power'} 60% { content: 'Blue Calm'} 90%, 100% { content: 'Yellow Joy'} } So, now we’ve got a site with a sticky heading that updates as you scroll. But wait, right now it just switches instantly. Where’s the animation?! Here’s where it gets interesting. Since we’re not using JavaScript or any string manipulation, we have to write the keyframes ourselves.
The best approach is to start from the target heading you want to reach, and build backwards. So, if you want to animate between the first and second heading, it would look like this: @keyframes headingContent { 0% { content: 'Primary Colors'} 9% { content: 'Primary Color'} 10% { content: 'Primary Colo'} 11% { content: 'Primary Col'} 12% { content: 'Primary Co'} 13% { content: 'Primary C'} 14% { content: 'Primary '} 15% { content: 'Primary'} 16% { content: 'Primar'} 17% { content: 'Prima'} 18% { content: 'Prim'} 19% { content: 'Pri'} 20% { content: 'Pr'} 21% { content: 'P'} 22% { content: 'R'} 23% { content: 'Re'} 24% { content: 'Red'} 25% { content: 'Red '} 26% { content: 'Red P'} 27% { content: 'Red Po'} 28%{ content: 'Red Pow'} 29% { content: 'Red Powe'} 30% { content: 'Red Power'} 60% { content: 'Blue Calm'} 90%, 100% { content: 'Yellow Joy'} } I simply went back by 1% each time, removing or adding a letter as needed. Note that in other cases, you might want to use a different step size, and not always 1%. For example, on longer headings with more words, you’ll probably want smaller steps. If we repeat this process for all the other headings, we’ll end up with a fully animated heading. User Preferences We talked before about accessibility and making sure the content works well with assistive technology, but there’s one more thing you should keep in mind: prefers-reduced-motion. Even though this isn’t a strict WCAG requirement for this kind of animation, it can make a big difference for people with vestibular sensitivities, so it’s a good idea to offer a way to show the content without animations. If you want to provide a non-animated alternative, all you need to do is wrap your @supports block with a prefers-reduced-motion query: @media screen and (prefers-reduced-motion: no-preference) { @supports (animation-timeline: scroll()) { /* style */ } } Leveling Up Let’s talk about variations. In the previous example, we animated the entire heading text, but we don’t have to do that. You can animate just the part you want, and use additional animations to enhance the effect and make things more interesting. For example, here I kept the text “Primary Color” fixed, and added a span after it that handles the animated text. <h1 class="scrollDrivenHeading" aria-hidden="true"> Primary Color<span></span> </h1> And since I now have a separate span, I can also animate its color to match each value. In the next example, I kept the text animation on the span, but instead of changing the text color, I added another scroll-driven animation on the heading itself to change its background color. This way, you can add as many animations as you want and change whatever you like. Your Turn! CSS Scroll-Driven Animations are more than just a cool trick; they’re a game-changer that opens the door to a whole new world of web design. With just a bit of creativity, you can turn even the most ordinary pages into something interactive, memorable, and truly engaging. The possibilities really are endless, from subtle effects that enhance the user experience, to wild, animated transitions that make your site stand out. So, what would you build with scroll-driven animations? What would you create with this new superpower? Try it out, experiment, and if you come up with something cool, have some ideas, wild experiments, or even weird failures, I’d love to hear about them. I’m always excited to see what others come up with, so feel free to share your work, questions, or feedback below. 
Special thanks to Cristian Díaz for reviewing the examples, making sure everything is accessible, and contributing valuable advice and improvements. Scroll-Driven Sticky Heading originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
The Layout Maestro Course
- Links
- education
- layout
Layout. It’s one of those easy-to-learn, difficult-to-master things, like they say about playing bass. Not because it’s innately difficult to, say, place two elements next to each other, but because there are many, many ways to tackle it. And …
Layout. It’s one of those easy-to-learn, difficult-to-master things, like they say about playing bass. Not because it’s innately difficult to, say, place two elements next to each other, but because there are many, many ways to tackle it. And layout is one area of CSS that seems to evolve more than others, as we’ve seen in the past 10-ish years with the Flexbox, CSS Grid, Subgrid, and now Masonry to name but a few. May as well toss in Container Queries while we’re at it. And reading flow. And… That’s a good way to start talking about a new online course that Ahmad Shadeed is planning to release called The Layout Maestro. I love that name, by the way. It captures exactly how I think about working with layouts: orchestrating how and where things are arranged on a page. Layouts are rarely static these days. They are expected to adapt to the user’s context, not totally unlike a song changing keys. Ahmad is the perfect maestro to lead a course on layout, as he does more than most when it comes to experimenting with layout features and demonstrating practical use cases, as you may have already seen in his thorough and wildly popular interactive guides on Container Queries, grid areas, box alignment, and positioning (just to name a few). The course is still in development, but you can get a leg up and sign up to be notified by email when it’s ready. That’s literally all of the information I have at this point, but I still feel compelled to share it and encourage you to sign up for updates because I know few people more qualified to wax on about CSS layout than Ahmad and am nothing but confident that it will be great, worth the time, and worth the investment. I’m also learning that I have a really hard time typing “maestro” correctly. 🤓 The Layout Maestro Course originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Better CSS Shapes Using shape() — Part 4: Close and Move
- Articles
- art
- clip-path
- CSS functions
- css shapes
The shape() function's close and move commands may not be ones you reach for often, but are incredibly useful for certain shapes.
This is the fourth post in a series about the new CSS shape() function. So far, we’ve covered the most common commands you will use to draw various shapes, including lines, arcs, and curves. This time, I want to introduce you to two more commands: close and move. They’re fairly simple in practice, and I think you will rarely use them, but they are incredibly useful when you need them. Better CSS Shapes Using shape() Lines and Arcs More on Arcs Curves Close and Move (you are here!) The close command In the first part, we said that shape() always starts with a from command to define the first starting point but what about the end? It should end with a close command. But you never used any close command in the previous articles!? That’s true. I never did because I either “close” the shape myself or rely on the browser to “close” it for me. Said like that, it’s a bit confusing, but let’s take a simple example to better understand: clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%) If you try this code, you will get a triangle shape, but if you look closely, you will notice that we have only two line commands whereas, to draw a triangle, we need a total of three lines. The last line between 100% 100% and 0 0 is implicit, and that’s the part where the browser is closing the shape for me without having to explicitly use a close command. I could have written the following: clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, close) Or instead, define the last line by myself: clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, line to 0 0) But since the browser is able to close the shape alone, there is no need to add that last line command nor do we need to explicitly add the close command. This might lead you to think that the close command is useless, right? It’s true in most cases (after all, I have written three articles about shape() without using it), but it’s important to know about it and what it does. In some particular cases, it can be useful, especially if used in the middle of a shape. In this example, my starting point is the center and the logic of the shape is to draw four triangles. In the process, I need to get back to the center each time. So, instead of writing line to center, I simply write close and the browser will automatically get back to the initial point! Intuitively, we should write the following: clip-path: shape( from center, line to 20% 0, hline by 60%, line to center, /* triangle 1 */ line to 100% 20%, vline by 60%, line to center, /* triangle 2 */ line to 20% 100%, hline by 60%, line to center, /* triangle 3 */ line to 0 20%, vline by 60% /* triangle 4 */ ) But we can optimize it a little and simply do this instead: clip-path: shape( from center, line to 20% 0, hline by 60%, close, line to 100% 20%, vline by 60%, close, line to 20% 100%, hline by 60%, close, line to 0 20%, vline by 60% ) We write less code, sure, but another important thing is that if I update the center value with another position, the close command will follow that position. Don’t forget about this trick. It can help you optimize a lot of shapes by writing less code. The move command Let’s turn our attention to another shape() command you may rarely use, but can be incredibly useful in certain situations: the move command. Most times when we need to draw a shape, it’s actually one continuous shape. But it may happen that our shape is composed of different parts not linked together. In these situations, the move command is what you will need. 
Let’s take an example, similar to the previous one, but this time the triangles don’t touch each other: Intuitively, we may think we need four separate elements, each with its own shape() definition. But that example is a single shape! The trick is to draw the first triangle, then “move” somewhere else to draw the next one, and so on. The move command is similar to the from command but we use it in the middle of shape(). clip-path: shape( from 50% 40%, line to 20% 0, hline by 60%, close, /* triangle 1 */ move to 60% 50%, line to 100% 20%, vline by 60%, close, /* triangle 2 */ move to 50% 60%, line to 20% 100%, hline by 60%, close, /* triangle 3 */ move to 40% 50%, line to 0 20%, vline by 60% /* triangle 4 */ ) After drawing the first triangle, we “close” it and “move” to a new point to draw the next triangle. We can have multiple shapes using a single shape() definition. A more generic version of the code will look like the below: clip-path: shape( from X1 Y1, ..., close, /* shape 1 */ move to X2 Y2, ..., close, /* shape 2 */ ... move to Xn Yn, ... /* shape N */ ) The close commands before the move commands aren’t mandatory, so the code can be simplified to this: clip-path: shape( from X1 Y1, ..., /* shape 1 */ move to X2 Y2, ..., /* shape 2 */ ... move to Xn Yn, ... /* shape N */ ) Let’s look at a few interesting use cases where this technique can be helpful. Cut-out shapes Previously, I shared a trick on how to create cut-out shapes using clip-path: polygon(). Starting from any kind of polygon, we can easily invert it to get its cut-out version: We can do the same using shape(). The idea is to have an intersection between the main shape and the rectangle shape that fits the element boundaries. We need two shapes, hence the need for the move command. The code is as follows: .shape { clip-path: shape(from ...., move to 0 0, hline to 100%, vline to 100%, hline to 0); } You start by creating your main shape and then you “move” to 0 0 and you create the rectangle shape (Remember, it’s the first shape we created in the first part of this series). We can even go further and introduce a CSS variable to easily switch between the normal shape and the inverted one. .shape { clip-path: shape(from .... var(--i,)); } .invert { --i:,move to 0 0, hline to 100%, vline to 100%, hline to 0; } By default, --i is not defined so var(--i,) will be empty and we get the main shape. If we define the variable with the rectangle shape, we get the inverted version. Here is an example using a rounded hexagon shape: In reality, the code should be as follows: .shape { clip-path: shape(evenodd from .... var(--i,)); } .invert { --i:,move to 0 0, hline to 100%, vline to 100%, hline to 0; } Notice the evenodd I am adding at the beginning of shape(). I won’t bother you with a detailed explanation on what it does but in some cases, the inverted shape is not visible and the fix is to add evenodd at the beginning. You can check the MDN page for more details. Another improvement we can do is to add a variable to control the space around the shape. Let’s suppose you want to make the hexagon shape of the previous example smaller. It’s tedious to update the code of the hexagon but it’s easier to update the code of the rectangle shape. .shape { clip-path: shape(evenodd from ... var(--i,)) content-box; } .invert { --d: 20px; padding: var(--d); --i: ,move to calc(-1*var(--d)) calc(-1*var(--d)), hline to calc(100% + var(--d)), vline to calc(100% + var(--d)), hline to calc(-1*var(--d)); } We first update the reference box of the shape to be content-box.
Then we add some padding which will logically reduce the area of the shape since it will no longer include the padding (nor the border). The padding is excluded (invisible) by default and here comes the trick where we update the rectangle shape to re-include the padding. That is why the --i variable is so verbose. It uses the value of the padding to extend the rectangle area and cover the whole element as if we didn’t have content-box. Not only can you easily invert any kind of shape, but you can also control the space around it! Here is another demo using the CSS-Tricks logo to illustrate how easy the method is: This exact same example is available in my SVG-to-CSS converter, providing you with the shape() code without having to do all of the math. Repetitive shapes Another interesting use case of the move command is when we need to repeat the same shape multiple times. Do you remember the difference between the by and the to directives? The by directive allows us to define relative coordinates considering the previous point. So, if we create our shape using only by, we can easily reuse the same code as many times as we want. Let’s start with a simple example of a circle shape: clip-path: shape(from X Y, arc by 0 -50px of 1%, arc by 0 50px of 1%) Starting from X Y, I draw a first arc moving upward by 50px, then I get back to X Y with another arc using the same offset, but downward. If you are a bit lost with the syntax, try reviewing Part 1 to refresh your memory about the arc command. How I drew the shape is not important. What is important is that whatever the value of X Y is, I will always get the same circle but in a different position. Do you see where I am going with this idea? If I want to add another circle, I simply repeat the same code with a different X Y. clip-path: shape( from X1 Y1, arc by 0 -50px of 1%, arc by 0 50px of 1%, move to X2 Y2, arc by 0 -50px of 1%, arc by 0 50px of 1% ) And since the code is the same, I can store the circle shape into a CSS variable and draw as many circles as I want: .shape { --sh:, arc by 0 -50px of 1%, arc by 0 50px of 1%; clip-path: shape( from X1 Y1 var(--sh), move to X2 Y2 var(--sh), ... move to Xn Yn var(--sh) ) } You don’t want a circle? Easy, you can update the --sh variable with any shape you want. Here is an example with three different shapes: And guess what? You can invert the whole thing using the cut-out technique by adding the rectangle shape at the end: This code is a perfect example of the shape() function’s power. We don’t have any code duplication and we can simply adjust the shape with CSS variables. This is something we are unable to achieve with the path() function because it doesn’t support variables. Conclusion That’s all for this fourth installment of our series on the CSS shape() function! We didn’t make any super complex shapes, but we learned how two simple commands can open a lot of possibilities for what can be done using shape(). Just for fun, here is one more demo recreating a classic three-dot loader using the last technique we covered. Notice how much further we could go, adding things like animation to the mix: Better CSS Shapes Using shape() Lines and Arcs More on Arcs Curves Close and Move (you are here!) Better CSS Shapes Using shape() — Part 4: Close and Move originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
The Gap Strikes Back: Now Stylable
- Articles
- flexbox
- grid
- layout
- multi-column layout
Styling the space between layout items — the gap — has typically required some clever workarounds. But a new CSS feature changes all that with just a few simple CSS properties that make it easy, yet also flexible, to display styled separators between your layout items.
Four years ago, I wrote an article titled Minding the “gap”, where I talked about the CSS gap property, where it applied, and how it worked with various CSS layouts. At the time, I described how easy it was to evenly space items out in a flex, grid, or multi-column layout, by using the gap property. But, I also said that styling the gap areas was much harder, and I shared a workaround. However, workarounds like using extra HTML elements, pseudo-elements, or borders to draw separator lines tend to come with drawbacks, especially those that impact your layout size, interfere with assistive technologies, or pollute your markup with style-only elements. Today, I’m writing again about layout gaps, but this time, to tell you all about a new and exciting CSS feature that’s going to change it all. What you previously had to use workarounds for, you’ll soon be able to do with just a few simple CSS properties that make it easy, yet also flexible, to display styled separators between your layout items. There’s already a specification draft for the feature you can peruse. At the time I’m writing this, it is available in Chrome and Edge 139 behind a flag. But I believe it won’t be long before we turn that flag on. I believe other browsers are also very receptive and engaged. Displaying decorative lines between items of a layout can make a big difference. When used well, these lines can bring more structure to your layout, and give your users more of a sense of how the different regions of a page are organized. Introducing CSS gap decorations If you’ve ever used a multi-column layout, such as by using the column-width property, then you might already be familiar with gap decorations. You can draw vertical lines between the columns of a multi-column layout by using the column-rule property: article { column-width: 20rem; column-rule: 1px solid black; } The CSS gap decorations feature builds on this to provide a more comprehensive system that makes it easy for you to draw separator lines in other layout types. For example, the draft specification says that the column-rule property also works in flexbox and grid layouts: .my-grid-container { display: grid; gap: 2px; column-rule: 2px solid pink; } No need for extra elements or borders! The key benefit here is that the decoration happens in CSS only, where it belongs, with no impacts to your semantic markup. The CSS gap decorations feature also introduces a new row-rule property for drawing lines between rows: .my-flex-container { display: flex; gap: 10px; row-rule: 10px dotted limegreen; column-rule: 5px dashed coral; } But that’s not all, because the above syntax also allows you to define multiple, comma-separated, line style values, and use the same repeat() function that CSS grid already uses for row and column templates. This makes it possible to define different styles of line decorations in a single layout, and adapt to an unknown number of gaps: .my-container { display: grid; gap: 2px; row-rule: repeat(2, 1px dashed red), 2px solid black, repeat(auto, 1px dotted green); } Finally, the CSS gap decorations feature comes with additional CSS properties such as row-rule-break, column-rule-break, row-rule-outset, column-rule-outset, and gap-rule-paint-order, which make it possible to precisely customize the way the separators are drawn, whether they overlap, or where they start and end. And of course, all of this works across grid, flexbox, multi-column, and soon, masonry! 
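To tie those pieces together, here is a rough sketch of how the rule properties could combine on a single grid container (the .cards class name and the specific values are mine, and since the feature is still behind a flag and the draft may evolve, treat this as illustrative rather than definitive):

.cards {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  gap: 1rem 2rem;
  /* a thicker rule under the first row, thinner ones after that */
  row-rule: 3px solid #ccc, repeat(auto, 1px solid #ccc);
  /* dotted vertical rules between columns */
  column-rule: 1px dotted #ccc;
  /* keep the vertical rules within the row content instead of extending into the row gaps */
  column-rule-outset: 0;
}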
Browser support Currently, the CSS gap decorations feature is only available in Chromium-based browsers. The feature is still early in the making, and there’s time for you all to try it and to provide feedback that could help make the feature better and more adapted to your needs. If you want to try the feature today, make sure to use Edge or Chrome, starting with version 139 (or another Chromium-based browser that matches those versions), and enable the flag by following these steps: In Chrome or Edge, go to about://flags. In the search field, search for Enable Experimental Web Platform Features. Enable the flag. Restart the browser. To put this all into practice, let’s walk through an example together that uses the new CSS gap decorations feature. I also have a final example you can demo. Using CSS gap decorations Let’s build a simple web page to learn how to use the feature. Here is what we’ll be building: The above layout contains a header section with a title, a navigation menu with a few links, a main section with a series of short paragraphs of text and photos, and a footer. We’ll use the following markup: <body> <header> <h1>My personal site</h1> </header> <nav> <ul> <li><a href="#">Home</a></li> <li><a href="#">Blog</a></li> <li><a href="#">About</a></li> <li><a href="#">Links</a></li> </ul> </nav> <main> <article> <p>...</p> </article> <article> <img src="cat.jpg" alt="A sleeping cat."> </article> <article> <p>...</p> </article> <article> <img src="tree.jpg" alt="An old olive tree trunk."> </article> <article> <p>...</p> </article> <article> <p>...</p> </article> <article> <p>...</p> </article> <article> <img src="strings.jpg" alt="Snow flakes falling in a motion blur effect."> </article> </main> <footer> <p>© 2025 Patrick Brosset</p> </footer> </body> We’ll start by making the <body> element be a grid container. This way, we can space out the <header>, <nav>, <main>, and <footer> elements apart in one go by using the gap property: body { display: grid; gap: 4rem; margin: 2rem; } Let’s now use the CSS gap decorations feature to display horizontal separator lines within the gaps we just defined: body { display: grid; gap: 4rem; margin: 2rem; row-rule: 1rem solid #efefef; } This gives us the following result: We can do a bit better by making the first horizontal line look different than the other two lines, and simplify the row-rule value by using the repeat() syntax: body { display: grid; gap: 4rem; margin: 2rem; row-rule: 1rem solid #efefef, repeat(2, 2px solid #efefef); } With this new row-rule property value, we’re telling the browser to draw the first horizontal separator as a 1rem thick line, and the next two separators as 2px thick lines, which gives the following result: Now, let’s turn our attention to the navigation element and its list of links. We’ll use flexbox to display the links in a single row, where each link is separated from the other links by a gap and a vertical line: nav ul { display: flex; flex-wrap: wrap; gap: 2rem; column-rule: 2px dashed #666; } Very similarly to how we used the row-rule property before, we’re now using the column-rule property to display a dashed 2px thick separator between the links. Our example web page now looks like this: The last thing we need to change is the <main> element and its paragraphs and pictures. 
We’ll use flexbox again and display the various children in a wrapping row of varying width items: main { display: flex; flex-wrap: wrap; gap: 4rem; } main > * { flex: 1 1 200px; } main article:has(p) { flex-basis: 400px; } In the above code snippet, we’re setting the <main> element to be a wrapping flex container with a 4rem gap between items and flex lines. We’re also making the items have a flex basis size of 200px for pictures and 400px for text, and allowing them to grow and shrink as needed. This gives us the following result: Let’s use CSS gap decorations to bring a little more structure to our layout by drawing 2px thick separator lines between the rows and columns of the layout: main { display: flex; flex-wrap: wrap; gap: 4rem; row-rule: 2px solid #999; column-rule: 2px solid #999; } This gives us the following result, which is very close to our expected design: The last detail we want to change is related to the vertical lines. We don’t want them to span across the entire height of the flex lines but instead start and stop where the content starts and stops. With CSS gap decorations, we can easily achieve this by using the column-rule-outset property to fine-tune exactly where the decorations start and end, relative to the gap area: main { display: flex; flex-wrap: wrap; gap: 4rem; row-rule: 2px solid #999; column-rule: 2px solid #999; column-rule-outset: 0; } The column-rule-outset property above makes the vertical column separators span the height of each row, excluding the gap area, which is what we want: And with that, we’re done with our example. Check out the live example, and source code. Learn more There’s more to the feature, and I mentioned a couple more CSS properties earlier: gap-rule-paint-order, which lets you control which of the decorations, rows or columns, appear above the other ones. row-rule-break / column-rule-break, which set the behavior of the decoration lines at intersections. In particular, whether they are made of multiple segments, which start and end at intersections, or single, continuous lines. Because the feature is new, there isn’t MDN documentation about it yet. So to learn more, check out: CSS Gap Decorations Module Level 1 (First Public Working Draft) Microsoft Edge Explainer The Edge team has also created an interactive playground where you can use visual controls to configure gap decorations. And, of course, the reason this is all implemented behind a flag is to elicit feedback from developers like you! If you have any feedback, questions, or bugs about this feature, I definitely encourage you to open a new ticket on the Chromium issue tracker. The Gap Strikes Back: Now Stylable originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Using CSS Cascade Layers With Tailwind Utilities
- Articles
- cascade layers
- framework
- tailwind
Being the bad boy I am, I don't take Tailwind's default approach to cascade layers as the "best" one. Over a year experimenting with Tailwind and vanilla CSS, I've come across what I believe is a better solution.
Adam Wathan has (very cleverly) built Tailwind with CSS Cascade Layers, making it extremely powerful for organizing styles by priority. @layer theme, base, components, utilities; @import 'tailwindcss/theme.css' layer(theme); @import 'tailwindcss/utilities.css' layer(utilities); The core of Tailwind are its utilities. This means you have two choices: The default choice The unorthodox choice The default choice The default choice is to follow Tailwind’s recommended layer order: place components first, and Tailwind utilities last. So, if you’re building components, you need to manually wrap your components with a @layer directive. Then, overwrite your component styles with Tailwind, putting Tailwind as the “most important layer”. /* Write your components */ @layer components { .component { /* Your CSS here */ } } <!-- Override with Tailwind utilities --> <div class="component p-4"> ... </div> That’s a decent way of doing things. But, being the bad boy I am, I don’t take the default approach as the “best” one. Over a year of (major) experimentation with Tailwind and vanilla CSS, I’ve come across what I believe is a better solution. The Unorthodox Choice Before we go on, I have to tell you that I’m writing a course called Unorthodox Tailwind — this shows you everything I know about using Tailwind and CSS in synergistic ways, leveraging the strengths of each. Shameless plug aside, let’s dive into the Unorthodox Choice now. In this case, the Unorthodox Choice is to write your styles in an unnamed layer — or any layer after utilities, really — so that your CSS naturally overwrites Tailwind utilities. Of these two, I prefer the unnamed layer option: /* Unnamed layer option */ @layer theme, base, components, utilities; /* Write your CSS normally here */ .component { /* ... */ } /* Named layer option */ /* Use whatever layer name you come up with. I simply used css here because it made most sense for explaining things */ @layer theme, base, components, utilities, css; @layer css { .component { /* ... */ } } I have many reasons why I do this: I don’t like to add unnecessary CSS layers because it makes code harder to write — more keystrokes, having to remember the specific layer I used it in, etc. I’m pretty skilled with ITCSS, selector specificity, and all the good-old-stuff you’d expect from a seasoned front-end developer, so writing CSS in a single layer doesn’t scare me at all. I can do complex stuff that are hard or impossible to do in Tailwind (like theming and animations) in CSS. Your mileage may vary, of course. Now, if you have followed my reasoning so far, you would have noticed that I use Tailwind very differently: Tailwind utilities are not the “most important” layer. My unnamed CSS layer is the most important one. I do this so I can: Build prototypes with Tailwind (quickly, easily, especially with the tools I’ve created). Shift these properties to CSS when they get more complex — so I don’t have to read messy utility-littered HTML that makes my heart sink. Not because utility HTML is bad, but because it takes lots of brain processing power to figure out what’s happening. Finally, here’s the nice thing about Tailwind being in a utility layer: I can always !important a utility to give it strength. <!-- !important the padding utility --> <div class="component !p-4"> ... </div> Whoa, hold on, wait a minute! Isn’t this wrong, you might ask? Nope. The !important keyword has traditionally been used to override classes. 
In this case, we’re leveraging the !important feature in CSS Layers to say the Tailwind utility is more important than any CSS in the unnamed layer. This is perfectly valid and is a built-in feature of CSS Layers. Besides, !important is so explicit (and used so little) that it makes sense for one-off quick-and-dirty adjustments (without creating a brand new selector for it). Tailwind utilities are more powerful than they seem Tailwind utilities are not a 1:1 map between a class and a CSS property. Built-in Tailwind utilities mostly look like this, which can give people the wrong impression. Tailwind utilities are more like convenient Sass mixins, which means we can build effective tools for layouts, theming, typography, and more, through them. You can find out about these thoughts inside Unorthodox Tailwind. Thanks for reading and I hope you’re enjoying a new way of looking at (or using) Tailwind! Using CSS Cascade Layers With Tailwind Utilities originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
CSS Blob Recipes
- Articles
- art
- blobs
- css shapes
Blob, Blob, Blob. What's the most effective way to create blob shapes in CSS? Turns out, as always, there are many. Let's compare them together!
Blob, Blob, Blob. You hate them. You love them. Personally, as a design illiterate, I like to overuse them… a lot. And when you repeat the same process over and over again, it’s only a question of how much you can optimize it, or in this case, what’s the easiest way to create blobs in CSS? Turns out, as always, there are many approaches. To know if our following blobs are worth using, we’ll need them to pass three tests: They can be with just a single element (and preferably without pseudos). They can be easily designed (ideally through an online tool). We can use gradient backgrounds, borders, shadows, and other CSS effects on them. Without further ado, let’s Blob, Blob, Blob right in. Just generate them online I know it’s disenchanting to click on an article about making blobs in CSS just for me to say you can generate them outside CSS. Still, it’s probably the most common way to create blobs on the web, so to be thorough, these are some online tools I’ve used before to create SVG blobs. Haikei. Probably the one I have used the most since, besides blobs, it can also generate lots of SVG backgrounds. Blobmaker. A dedicated tool for making blobs. It’s apparently part of Haikei now, so you can use both. Lastly, almost all graphic programs let you hand-draw blobs and export them as SVGs. For example, this is one I generated just now. Keep it around, as it will come in handy later. <svg viewBox="0 0 200 200" xmlns="http://www.w3.org/2000/svg"> <path fill="#FA4D56" d="M65.4,-37.9C79.2,-13.9,81,17,68.1,38C55.2,59.1,27.6,70.5,1.5,69.6C-24.6,68.8,-49.3,55.7,-56,38.2C-62.6,20.7,-51.3,-1.2,-39,-24.4C-26.7,-47.6,-13.3,-72,6.2,-75.6C25.8,-79.2,51.6,-62,65.4,-37.9Z" transform="translate(100 100)" /> </svg> Using border-radius While counterintuitive, we can use the border-radius property to create blobs. This technique isn’t new by any means; it was first described by Nils Binder in 2018, but it is still fairly unknown. Even for those who use it, the inner workings are not entirely clear. To start, you may know the border-radius is a shorthand to each individual corner’s radius, going from the top left corner clockwise. For example, we can set each corner’s border-radius to get a bubbly square shape: <div class="blob"></div> .blob { border-radius: 25% 50% 75% 100%; } However, what border-radius does — and also why it’s called “radius” — is to shape each corner following a circle of the given radius. For example, if we set the top left corner to 25%, it will follow a circle with a radius 25% the size of the shape. .blob { border-top-left-radius: 25%; } What’s less known is that each corner property is still a shortcut towards its horizontal and vertical radii. Normally, you set both radii to the same value, getting a circle, but you can set them individually to create an ellipse. For example, the following sets the horizontal radius to 25% of the element’s width and the vertical to 50% of its height: .blob { border-top-left-radius: 25% 50%; } We can now shape each corner like an ellipse, and it is the combination of all four ellipses that creates the illusion of a blob! Just take into consideration that to use the horizontal and vertical radii syntax through the border-radius property, we’ll need to separate the horizontal from the vertical radii using a forward slash (/). .blob { border-radius: /* horizontal */ 100% 30% 60% 70% / /* vertical */ 50% 40% 70% 70%; } The syntax isn’t too intuitive, so designing a blob from scratch will likely be a headache. 
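To see where that lands, here's a hedged sketch of a complete single-element blob — the radii below are hand-guessed (exactly the headache in question), but note that gradients and shadows happily follow the shape:

```css
/* One element, no pseudos — illustrative values only */
.blob {
  width: 200px;
  aspect-ratio: 1;
  border-radius: 70% 30% 60% 40% / 50% 60% 40% 70%;
  background: linear-gradient(135deg, #fa4d56, #ffb000);
  box-shadow: 0 1rem 2rem rgb(0 0 0 / 0.25);
}
```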
Luckily, Nils Binder made a tool exactly for that! Blobbing blobs together This hack is awesome. We aren’t supposed to use border-radius like that, but we still do. Admittedly, we are limited to boring blobs. Due to the nature of border-radius, no matter how hard we try, we will only get convex shapes. Just going off border-radius, we can try to minimize it a little by sticking more than one blob together: However, I don’t want to spend too much time on this technique since it is too impractical to be worth it. To name a few drawbacks: We are using more than one element or, at the very least, an extra pseudo-element. Ideally, we want to keep it to one element. We don’t have a tool to prototype our blobby amalgamations, so making one is a process of trial and error. We can’t use borders, gradients, or box shadows since they would reveal the element’s outlines. Multiple backgrounds and SVG filters This one is an improvement on the Gooey Effect, described here by Lucas Bebber, although I don’t know who first came up with it. In the original effect, several elements can be morphed together like drops of liquid sticking to and flowing out of each other: It works by first blurring shapes nearby, creating some connected shadows. Then we crank up the contrast, forcing the blur out and smoothly connecting them in the process. Take, for example, this demo by Chris Coyier (It’s from 2014, so more than 10 years ago!): If you look at the code, you’ll notice Chris uses the filter property along with the blur() and contrast() functions, which I’ve also seen in other blob demos. To be specific, it applies blur() on each individual circle and then contrast() on the parent element. So, if we have the following HTML: <div class="blob"> <div class="subblob"></div> <div class="subblob"></div> <div class="subblob"></div> </div> …we would need to apply filters and background colors as such: .blob { filter: contrast(50); background: white; /* Solid colors are necessary */ } .subblob { filter: blur(15px); background: black; /* Solid colors are necessary */ } However, there is a good reason why those demos stick to white shapes and black backgrounds (or vice versa) since things get unpredictable once colors aren’t contrast-y enough. See it for yourself in the following demo by changing the color. Just be wary: shades get ugly. To solve this, we will use an SVG filter instead. I don’t want to get too technical on SVG (if you want to, read Lucas’ post!). In a nutshell, we can apply blurring and contrast filters using SVGs, but now, we can also pick which color channel we apply the contrast to, unlike normal contrast(), which modifies all colors. Since we want to leave color channels (R, G and B) untouched, we will only crank the contrast up for the alpha channel. That translates to the next SVG filter, which can be embedded in the HTML: <svg xmlns="http://www.w3.org/2000/svg" version="1.1" style="position: absolute;"> <defs> <filter id="blob"> <feGaussianBlur in="SourceGraphic" stdDeviation="12" result="blur" /> <feColorMatrix in="blur" mode="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -6" result="goo" /> <feBlend in="SourceGraphic" in2="goo" /> </filter> </defs> </svg> To apply it, we will again use filter, but this time we’ll set it to url("#blob"), so that it pulls the SVG from the HTML. .blob { filter: url("#blob"); } And now we can even use it with gradient backgrounds!
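As a rough sketch of what that looks like — assuming the #blob SVG filter above is embedded in the page and we're reusing the .blob/.subblob markup — the parent takes the filter and each piece carries its own gradient:

```css
/* The parent gets the SVG filter; it stays transparent itself */
.blob {
  filter: url("#blob");
}

/* Each sub-shape can use a gradient now — the filter only sharpens the alpha channel */
.subblob {
  width: 120px;
  aspect-ratio: 1;
  border-radius: 50%;
  background: linear-gradient(45deg, #fa4d56, #ff832b); /* illustrative colors */
}
```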
That being said, this approach comes with two small, but important, changes to common CSS filters: The filter is applied to the parent element, not the individual shapes. The parent element must be transparent (which is a huge advantage). To change the background color, we can instead change the body or other ancestors’ background, and it will work with no issues. What’s left is to place the .subblob elements together such that they make a blobby enough shape, then apply the SVG filters to morph them: Making it one element This works well, but it has a similar issue to the blob we made by morphing several border-radius instances: too many elements for a simple blob. Luckily, we can take advantage of the background property to create multiple shapes and morph them together using SVG filters, all in a single element. Since we are keeping it to one element, we will go back to just one empty .blob div: <div class="blob"></div> To recap, the background shorthand can set all background properties and also set multiple backgrounds at once. Of all the properties, we only care about the background-image, background-position and background-size. First, we will use background-image along with radial-gradient() to create a circle inside the element: body { background: radial-gradient(farthest-side, var(--blob-color) 100%, #0000); background-repeat: no-repeat; /* Important! */ } Here is what each parameter does: farthest-side: Confines the shape to the element’s box farthest from its center. This way, it is kept as a circle. var(--blob-color) 100%: Fills the background shape from 0 to 100% with the same color, so it ends up as a solid color. #0000: After the shape is done, it makes a full stop to transparency, so the color ends. The next part is moving and resizing the circle using the background-position and background-size properties. Luckily, both can be set on background after the gradient, separated from each other by a forward slash (/). body { background: radial-gradient(...) 20% 30% / 30% 40%; background-repeat: no-repeat; /* Important! */ } The first pair of percentages sets the shape’s horizontal and vertical position (taking as a reference the top-left corner), while the second pair sets the shape’s width and height (taking as a reference the element’s size). As I mentioned, we can stack up different backgrounds together, which means we can create as many circles/ellipses as we want! For example, we can create three ellipses on the same element: .blob { background: radial-gradient(farthest-side, var(--blob-color) 100%, #0000) 20% 30% / 30% 40%, radial-gradient(farthest-side, var(--blob-color) 100%, #0000) 80% 50% / 40% 60%, radial-gradient(farthest-side, var(--blob-color) 100%, #0000) 50% 70% / 50% 50%; background-repeat: no-repeat; } What’s even better is that SVG filters don’t care whether shapes are made of elements or backgrounds, so we can also morph them together using the last url(#blob) filter! While this method may be a little too much for blobs, it unlocks squishing, stretching, dividing, and merging blobs in seamless animations. Again, all these tricks are awesome, but not enough for what we want! We accomplished reducing the blob to a single element, but we still can’t use gradients, borders, or shadows on them, and also, they are tedious to design and model. Then, that brings us to the ultimate blob approach… Using the shape() function Fortunately, there is a new way to make blobs that just dropped to CSS: the shape() function! 
I’ll explain shape()‘s syntax briefly, but for an in-depth explanation, you’ll want to check out both this explainer from the CSS-Tricks Almanac as well as Temani Afif‘s three-part series on the shape() function, as well as his recent article about blobs. First off, the CSS shape() function is used alongside the clip-path property to cut elements into any shape we want. More specifically, it uses a verbal version of SVG’s path syntax. The syntax has lots of commands for lots of types of lines, but when blobbing with shape(), we’ll define curves using the curve command: .blob { clip-path: shape( from X0 Y0, curve to X1 Y1 with Xc1 Yc1, curve to X2 Y2 with Xc21 Yc21 / Xc22 Yc22 /* ... */ ); } Let’s break down each parameter: X0 Y0 defines the starting point of the shape. curve starts the curve where X1 Y1 is the next point of the shape, while Xc1 Yc1 defines a control point used in Bézier curves. The next parameter is similar, but we used Xc21 Yc21 / Xc22 Yc22 instead to define two control points on the Bézier curve. I honestly don’t understand Bézier curves and control points completely, but luckily, we don’t need them to use shape() and blobs! Again, shape() uses a verbal version of SVG’s path syntax, so it can draw any shape an SVG can, which means that we can translate the SVG blobs we generated earlier… and CSS-ify them. To do so, we’ll grab the d attribute (which defines the path) from our SVG and paste it into Temani’s SVG to shape() generator. This is the exact code the tool generated for me: .blob { aspect-ratio: 0.925; /* Generated too! */ clip-path: shape( from 91.52% 26.2%, curve to 93.52% 78.28% with 101.76% 42.67%/103.09% 63.87%, curve to 44.11% 99.97% with 83.95% 92.76%/63.47% 100.58%, curve to 1.45% 78.42% with 24.74% 99.42%/6.42% 90.43%, curve to 14.06% 35.46% with -3.45% 66.41%/4.93% 51.38%, curve to 47.59% 0.33% with 23.18% 19.54%/33.13% 2.8%, curve to 91.52% 26.2% with 62.14% -2.14%/81.28% 9.66% ); } As you might have guessed, it returns our beautiful blob: Let’s check if it passes our requirements: Yes, they can be made of a single element. Yes, they can also be created in a generator and then translated into CSS. Yes, we can use gradient backgrounds, but due to the nature of clip-path(), borders and shadows get cut out. Two out of three? Maybe two and a half of three? That’s a big improvement over the other approaches, even if it’s not perfect. Conclusion So, alas, we failed to find what I believe is the perfect CSS approach to blobs. I am, however, amazed how something so trivial designing blobs can teach us about so many tricks and new CSS features, many of which I didn’t know myself. CSS Blob Recipes originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
KelpUI
- Links
- framework
KelpUI is a new library that Chris Ferdinandi is developing, designed to leverage newer CSS features and Web Components. I've enjoyed following Chris as he's published an ongoing series of articles detailing his thought process behind the library, getting deep into his approach. You really get a clear picture of his strategy and I love it.
KelpUI is a new library that Chris Ferdinandi is developing, designed to leverage newer CSS features and Web Components. I’ve enjoyed following Chris as he’s published an ongoing series of articles detailing his thought process behind the library, getting deep into his approach. You really get a clear picture of his strategy and I love it. He outlined his principles up front in a post back in April: I’m imagining a system that includes… Base styles for all of the common HTML elements. Loads of utility classes for nudging and tweaking things. Group classes for styling more complex UI elements without a million little classes. Easy customization with CSS variables. Web Components to progressively add interactivity to functional HTML. All of the Web Component HTML lives in the light DOM, so it’s easy to style and reason about. I’m imagining something that can be loaded directly from a CDN, downloaded locally, or imported if you want to roll your own build. And that’s what I’ve seen so far. The Cascade is openly embraced and logically structured with Cascade Layers. Plenty of utility classes are included, with extra care put into how they are named. Selectors are kept simple and specificity is nice and low, where needed. Layouts are flexible with good constraints. Color palettes are accessible and sport semantic naming. Chris has even put a ton of thought into how KelpUI is licensed. KelpUI is still evolving, and that’s part of the beauty of looking at it now and following Chris’s blog as he openly chronicles his approach. There’s always going to be some opinionated directions in a library like this, but I love that the guiding philosophy is so clear and is being used as a yardstick to drive decisions. As I write this, Chris is openly questioning the way he optimizes the library, demonstrating the tensions between things like performance and a good developer experience. Looks like it’ll be a good system, but even more than that, it’s a wonderful learning journey that’s worth following. KelpUI originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Poking at the CSS if() Function a Little More: Conditional Color Theming
- Articles
- CSS functions
The CSS if() function enables us to use values conditionally, but what exactly does if() do? Let's look at a possible real-world use case.
Chrome 137 shipped the if() CSS function, so it’s totally possible we’ll see other browsers implement it, though it’s tough to know exactly when. Whatever the case, if() enables us to use values conditionally, which we can already do with queries and other functions (e.g., media queries and the light-dark() function), so I’m sure you’re wondering: What exactly does if() do? Sunkanmi gave us a nice overview of the function yesterday, poking at the syntax at a high level. I’d like to poke at it a little harder in this article, getting into some possible real-world usage. To recap, if() conditionally assigns a value to a property based on the value of a CSS variable. For example, we could assign different values to the color and background properties based on the value of --theme: --theme: "Shamrock" color: hsl(146 50% 3%) background: hsl(146 50% 40%) --theme: Anything else color: hsl(43 74% 3%) background: hsl(43 74% 64%) :root { /* Change to fall back to the ‘else’ values */ --theme: "Shamrock"; body { color: if(style(--theme: "Shamrock"): hsl(146 50% 3%); else: hsl(43 74% 3%)); background: if(style(--theme: "Shamrock"): hsl(146 50% 40%); else: hsl(43 74% 64%)); } } I don’t love the syntax (too many colons, brackets, and so on), but we can format it like this (which I think is a bit clearer): color: if( style(--theme: "Shamrock"): hsl(146 50% 3%); else: hsl(43 74% 3%) ); We should be able to do a crazy number of things with if(), and I hope that becomes the case eventually, but I did some testing and learned that the syntax above is the only one that works. We can’t base the condition on the value of an ordinary CSS property (instead of a custom property), HTML attribute (using attr()), or any other value. For now, at least, the condition must be based on the value of a custom property (CSS variable). Exploring what we can do with if() Judging from that first example, it’s clear that we can use if() for theming (and design systems overall). While we could utilize the light-dark() function for this, what if the themes aren’t strictly light and dark, or what if we want to have more than two themes or light and dark modes for each theme? Well, that’s what if() can be used for. First, let’s create more themes/more conditions: :root { /* Shamrock | Saffron | Amethyst */ --theme: "Saffron"; /* ...I choose you! */ body { color: if( style(--theme: "Shamrock"): hsl(146 50% 3%); style(--theme: "Saffron"): hsl(43 74% 3%); style(--theme: "Amethyst"): hsl(282 47% 3%) ); background: if( style(--theme: "Shamrock"): hsl(146 50% 40%); style(--theme: "Saffron"): hsl(43 74% 64%); style(--theme: "Amethyst"): hsl(282 47% 56%) ); transition: 300ms; } } Pretty simple really, but there are a few easy-to-miss things. Firstly, there’s no “else condition” this time, which means that if the theme isn’t Shamrock, Saffron, or Amethyst, the default browser styles are used. Otherwise, the if() function resolves to the value of the first true statement, which is the Saffron theme in this case. Secondly, transitions work right out of the box; in the demo below, I’ve added a user interface for toggling the --theme, and for the transition, literally just transition: 300ms alongside the if() functions: Note: if theme-swapping is user-controlled, such as selecting an option, you don’t actually need if() at all. You can just use the logic that I’ve used at the beginning of the demo (:root:has(#shamrock:checked) { /* Styles */ }). Amit Sheen has an excellent demonstration over at Smashing Magazine. 
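To make that note concrete, here's a rough sketch of the selection-driven alternative — assuming hypothetical radio inputs with the IDs #shamrock, #saffron, and #amethyst — where :has() does the switching and no if() is needed:

```css
/* Hypothetical markup: three radio inputs with IDs #shamrock, #saffron, #amethyst */
:root:has(#shamrock:checked) {
  --color: hsl(146 50% 3%);
  --background: hsl(146 50% 40%);
}
:root:has(#saffron:checked) {
  --color: hsl(43 74% 3%);
  --background: hsl(43 74% 64%);
}
:root:has(#amethyst:checked) {
  --color: hsl(282 47% 3%);
  --background: hsl(282 47% 56%);
}
body {
  color: var(--color);
  background: var(--background);
  transition: 300ms;
}
```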
To make the code more maintainable though, we can slide the colors into CSS variables as well, then use them in the if() functions, then slide the if() functions themselves into CSS variables: /* Setup */ :root { /* Shamrock | Saffron | Amethyst */ --theme: "Shamrock"; /* ...I choose you! */ /* Base colors */ --shamrock: hsl(146 50% 40%); --saffron: hsl(43 74% 64%); --amethyst: hsl(282 47% 56%); /* Base colors, but at 3% lightness */ --shamrock-complementary: hsl(from var(--shamrock) h s 3%); --saffron-complementary: hsl(from var(--saffron) h s 3%); --amethyst-complementary: hsl(from var(--amethyst) h s 3%); --background: if( style(--theme: "Shamrock"): var(--shamrock); style(--theme: "Saffron"): var(--saffron); style(--theme: "Amethyst"): var(--amethyst) ); --color: if( style(--theme: "Shamrock"): var(--shamrock-complementary); style(--theme: "Saffron"): var(--saffron-complementary); style(--theme: "Amethyst"): var(--amethyst-complementary) ); /* Usage */ body { /* One variable, all ifs! */ background: var(--background); color: var(--color); accent-color: var(--color); /* Can’t forget this! */ transition: 300ms; } } As well as using CSS variables within the if() function, we can also nest other functions. In the example below, I’ve thrown light-dark() in there, which basically inverts the colors for dark mode: --background: if( style(--theme: "Shamrock"): light-dark(var(--shamrock), var(--shamrock-complementary)); style(--theme: "Saffron"): light-dark(var(--saffron), var(--saffron-complementary)); style(--theme: "Amethyst"): light-dark(var(--amethyst), var(--amethyst-complementary)) ); if() vs. Container style queries If you haven’t used container style queries before, they basically check if a container has a certain CSS variable (much like the if() function). Here’s the exact same example/demo but with container style queries instead of the if() function: :root { /* Shamrock | Saffron | Amethyst */ --theme: "Shamrock"; /* ...I choose you! */ --shamrock: hsl(146 50% 40%); --saffron: hsl(43 74% 64%); --amethyst: hsl(282 47% 56%); --shamrock-complementary: hsl(from var(--shamrock) h s 3%); --saffron-complementary: hsl(from var(--saffron) h s 3%); --amethyst-complementary: hsl(from var(--amethyst) h s 3%); body { /* Container has chosen Shamrock! */ @container style(--theme: "Shamrock") { --background: light-dark(var(--shamrock), var(--shamrock-complementary)); --color: light-dark(var(--shamrock-complementary), var(--shamrock)); } @container style(--theme: "Saffron") { --background: light-dark(var(--saffron), var(--saffron-complementary)); --color: light-dark(var(--saffron-complementary), var(--saffron)); } @container style(--theme: "Amethyst") { --background: light-dark(var(--amethyst), var(--amethyst-complementary)); --color: light-dark(var(--amethyst-complementary), var(--amethyst)); } background: var(--background); color: var(--color); accent-color: var(--color); transition: 300ms; } } As you can see, where if() facilitates conditional values, container style queries facilitate conditional properties and values. Other than that, it really is just a different syntax. 
Additional things you can do with if() (but might not realize) Check if a CSS variable exists: /* Hide icons if variable isn’t set */ .icon { display: if( style(--icon-family): inline-block; else: none ); } Create more-complex conditional statements: h1 { font-size: if( style(--largerHeadings: true): xxx-large; style(--theme: "themeWithLargerHeadings"): xxx-large ); } Check if two CSS variables match: /* If #s2 has the same background as #s1, add a border */ #s2 { border-top: if( style(--s2-background: var(--s1-background)): thin solid red ); } if() and calc(): When the math isn’t mathing This won’t work (maybe someone can help me pinpoint why): div { /* 3/3 = 1 */ --calc: calc(3/3); /* Blue, because if() won’t calculate --calc */ background: if(style(--calc: 1): red; else: blue); } To make if() calculate --calc, we’ll need to register the CSS variable using @property first, like this: @property --calc { syntax: "<number>"; initial-value: 0; inherits: false; } Closing thoughts Although I’m not keen on the syntax and how unreadable it can sometimes look (especially if it’s formatted on one line), I’m mega excited to see how if() evolves. I’d love to be able to use it with ordinary properties (e.g., color: if(style(background: white): black; style(background: black): white);) to avoid having to set CSS variables where possible. It’d also be awesome if calc() calculations could be calculated on the fly without having to register the variable. That being said, I’m still super happy with what if() does currently, and can’t wait to build even simpler design systems. Poking at the CSS if() Function a Little More: Conditional Color Theming originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Lightly Poking at the CSS if() Function in Chrome 137
- Articles
- CSS functions
The CSS if() function was recently implemented in Chrome 137, making it the first instance where we have it supported by a mainstream browser. Let's poke at it a bit at a very high level.
We’ve known it for a few weeks now, but the CSS if() function officially shipped in Chrome 137 version. It’s really fast development for a feature that the CSSWG resolved to add less than a year ago. We can typically expect this sort of thing — especially one that is unlike anything we currently have in CSS — to develop over a number of years before we can get our dirty hands on it. But here we are! I’m not here to debate whether if() in CSS should exist, nor do I want to answer whether CSS is a programming language; Chris already did that and definitely explained how exhausting that fun little argument can be. What I am here to do is poke at if() in these early days of support and explore what we know about it today at a pretty high level to get a feel for its syntax. We poke a little harder at it in another post where we’ll look at a more heady real-world example. Yes, it’s already here! Conditional statements exist everywhere in CSS. From at-rules to the parsing and matching of every statement to the DOM, CSS has always had conditionals. And, as Lea Verou put it, every selector is essentially a conditional! What we haven’t had, however, is a way to style an element against multiple conditions in one line, and then have it return a result conditionally. The if() function is a more advanced level of conditionals, where you can manipulate and have all your conditional statements assigned to a single property. .element { color: if(style(--theme: dark): oklch(52% 0.18 140); else: oklch(65% 0.05 220)); } How does if() work? Well before Chrome implemented the feature, back in 2021 when it was first proposed, the early syntax was like this: <if()> = if( <container-query>, [<declaration-value>]{1, 2} ) Now we’re looking at this instead: <if()> = if( [<if-statement>: <result>]*; <if-statement>: <result> ;? ) Where… The first <if-statement> represents conditions inside either style(), media(), or supports() wrapper functions. This allows us to write multiple if statements, as many as we may desire. Yes, you read that right. As many as we want! The final <if-statement> condition (else) is the default value when all other if statements fail. That’s the “easy” way to read the syntax. This is what’s in the spec: <if()> = if( [ <if-branch> ; ]* <if-branch> ;? ) <if-branch> = <if-condition> : <declaration-value>? <if-condition> = <boolean-expr[ <if-test> ]> | else <if-test> = supports( [ <ident> : <declaration-value> ] | <supports-condition> ) media( <media-feature> | <media-condition> ) | style( <style-query> ) A little wordy, right? So, let’s look at an example to wrap our heads around it. Say we want to change an element’s padding depending on a given active color scheme. We would set an if() statement with a style() function inside, and that would compare a given value with something like a custom variable to output a result. All this talk sounds so complicated, so let’s jump into code: .element { padding: if(style(--theme: dark): 2rem; else: 3rem); } The example above sets the padding to 2rem… if the --theme variable is set to dark. If not, it defaults to 3rem. I know, not exactly the sort of thing you might actually use the function for, but it’s merely to illustrate the basic idea. Make the syntax clean! One thing I noticed, though, is that things can get convoluted very very fast. 
Imagine you have three if() statements like this: :root { --height: 12.5rem; --width: 4rem; --weight: 2rem; } .element { height: if( style(--height: 3rem): 14.5rem; style(--width: 7rem): 10rem; style(--weight: 100rem): 2rem; else: var(--height) ); } We’re only working with three statements and, I’ll be honest, it makes my eyes hurt with complexity. So, I’m anticipating if() style patterns to be developed soon or prettier versions to adopt a formatting style for this. For example, if I were to break things out to be more readable, I would likely do something like this: :root { --height: 12.5rem; --width: 4rem; --weight: 2rem; } /* This is much cleaner, don't you think? */ .element { height: if( style(--height: 3rem): 14.5rem; style(--width: 7rem): 10rem; style(--weight: 100rem): 2rem; else: var(--height) ); } Much better, right? Now, you can definitely understand what is going on at a glance. That’s just me, though. Maybe you have different ideas… and if you do, I’d love to see them in the comments. Here’s a quick demo showing multiple conditionals in CSS for this animated ball to work. The width of the ball changes based on some custom variable values set. Gentle reminder that this is only supported in Chrome 137+ at the time I’m writing this: The supports() and media() statements Think of supports() the same way you would use the @supports at-rule. In fact, they work about the same, at least conceptually: /* formal syntax for @supports */ @supports <supports-condition> { <rule-list> } /* formal syntax for supports() */ supports( [ <ident> : <declaration-value> ] | <supports-condition> ) The only difference here is that supports() returns a value instead of matching a block of code. But, how does this work in real code? The <ident>: <declaration-value> you see here is, in this case, the property name: property value e.g. display: flex. Let’s say you want to check for support for the backdrop-filter property, particularly the blur() function. Typically, you can do this with @supports: /* Fallback in case the browser doesn't support backdrop-filter */ .card { backdrop-filter: unset; background-color: oklch(20% 50% 40% / 0.8); } @supports (backdrop-filter: blur(10px)) { .card { backdrop-filter: blur(10px); background-color: oklch(20% 50% 40% / 0.8); } } But, with CSS if(), we can also do this: .card { backdrop-filter: if( supports(backdrop-filter: blur(10px)): blur(10px); else: unset ); } Note: Think of unset here as a possible fallback for graceful degradation. That looks awesome, right? Multiple conditions can be checked as well for supports() and any of the supported functions. For example: .card { backdrop-filter: if( supports(backdrop-filter: blur(10px)): blur(10px); supports(backdrop-filter: invert(50%)): invert(50%); supports(backdrop-filter: hue-rotate(230deg)): hue-rotate(230deg);; else: unset ); } Now, take a look at the @media at-rule. You can compare and check for a bunch of stuff, but I’d like to keep it simple and check for whether or not a screen size is a certain width and apply styles based on that: h1 { font-size: 2rem; } @media (min-width: 768px) { h1 { font-size: 2.5rem; } } @media (min-width: 1200px) { h1 { font-size: 3rem; } } The media() wrapper works almost the same way as its at-rule counterpart. 
Note its syntax from the spec: /* formal syntax for @media */ @media <media-query-list> { <rule-list> } /* formal syntax for media() */ media( <media-feature> | <media-condition> ) Notice how at the end of the day, the formal syntax (<media-query>) is the same as the syntax for the media() function. And instead of returning a block of code in @media, you’d have something like this in the CSS inline if(): h1 { font-size: if( media(width >= 1200px): 3rem; media(width >= 768px): 2.5rem; else: 2rem ); } Again, these are early days As of the time of this writing, only the latest update of Chrome supports if()). I’m guessing other browsers will follow suit once usage and interest come in. I have no idea when that will happen. Until then, I think it’s fun to experiment with this stuff, just as others have been doing: The What If Machine: Bringing the “Iffy” Future of CSS into the Present (Lee Meyer) How To Correctly Use if() In CSS (Temani Afif) Future-Proofing Indirect Cyclical Conditions (Roma Komarov) The new if() function in CSS has landed in the latest Chrome (Amit Merchant) Experimenting with early features is how we help CSS evolve. If you’re trying things out, consider adding your feedback to the CSSWG and Chromium. The more use cases, the better, and that will certain help make future implementations better as well. Now that we have a high-level feel for the if()syntax, we’ll poke a little harder at the function in another article where we put it up against a real-world use case. Lightly Poking at the CSS if() Function in Chrome 137 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
A Better API for the Intersection and Mutation Observers
- Articles
- JavaScript
Zell discusses refactoring the Resize, Mutation, and Intersection Observer APIs for easier usage, demonstrating how to implement callback and event listener patterns, while highlighting available options and methods.
In a previous article, I showed you how to refactor the Resize Observer API into something way simpler to use: // From this const observer = new ResizeObserver(observerFn) function observerFn (entries) { for (let entry of entries) { // Do something with each entry } } const element = document.querySelector('#some-element') observer.observe(element); // To this const node = document.querySelector('#some-element') const obs = resizeObserver(node, { callback({ entry }) { // Do something with each entry } }) Today, we’re going to do the same for MutationObserver and IntersectionObserver. Refactoring Mutation Observer MutationObserver has almost the same API as that of ResizeObserver. So we can practically copy-paste the entire chunk of code we wrote for resizeObserver to mutationObserver. export function mutationObserver(node, options = {}) { const observer = new MutationObserver(observerFn) const { callback, ...opts } = options observer.observe(node, opts) function observerFn(entries) { for (const entry of entries) { // Callback pattern if (options.callback) options.callback({ entry, entries, observer }) // Event listener pattern else { node.dispatchEvent( new CustomEvent('mutate', { detail: { entry, entries, observer }, }) ) } } } } You can now use mutationObserver with the callback pattern or event listener pattern. const node = document.querySelector('.some-element') // Callback pattern const obs = mutationObserver(node, { callback ({ entry, entries }) { // Do what you want with each entry } }) // Event listener pattern node.addEventListener('mutate', event => { const { entry } = event.detail // Do what you want with each entry }) Much easier! Disconnecting the observer Unlike ResizeObserver who has two methods to stop observing elements, MutationObserver only has one, the disconnect method. export function mutationObserver(node, options = {}) { // ... return { disconnect() { observer.disconnect() } } } But, MutationObserver has a takeRecords method that lets you get unprocessed records before you disconnect. Since we should takeRecords before we disconnect, let’s use it inside disconnect. To create a complete API, we can return this method as well. export function mutationObserver(node, options = {}) { // ... return { // ... disconnect() { const records = observer.takeRecords() observer.disconnect() if (records.length > 0) observerFn(records) } } } Now we can disconnect our mutation observer easily with disconnect. const node = document.querySelector('.some-element') const obs = mutationObserver(/* ... */) obs.disconnect() MutationObserver’s observe options In case you were wondering, MutationObserver’s observe method can take in 7 options. Each one of them determines what to observe, and they all default to false. subtree: Monitors the entire subtree of nodes childList: Monitors for addition or removal children elements. If subtree is true, this monitors all descendant elements. attributes: Monitors for a change of attributes attributeFilter: Array of specific attributes to monitor attributeOldValue: Whether to record the previous attribute value if it was changed characterData: Monitors for change in character data characterDataOldValue: Whether to record the previous character data value Refactoring Intersection Observer The API for IntersectionObserver is similar to other observers. Again, you have to: Create a new observer: with the new keyword. This observer takes in an observer function to execute. 
Do something with the observed changes: This is done via the observer function that is passed into the observer. Observe a specific element: By using the observe method. (Optionally) unobserve the element: By using the unobserve or disconnect method (depending on which Observer you’re using). But IntersectionObserver requires you to pass the options in Step 1 (instead of Step 3). So here’s the code to use the IntersectionObserver API. // Step 1: Create a new observer and pass in relevant options const options = {/*...*/} const observer = new IntersectionObserver(observerFn, options) // Step 2: Do something with the observed changes function observerFn (entries) { for (const entry of entries) { // Do something with entry } } // Step 3: Observe the element const element = document.querySelector('#some-element') observer.observe(element) // Step 4 (optional): Disconnect the observer when we're done using it observer.disconnect() Since the code is similar, we can also copy-paste the code we wrote for mutationObserver into intersectionObserver. When doing so, we have to remember to pass the options into IntersectionObserver and not the observe method. export function intersectionObserver(node, options = {}) { const { callback, ...opts } = options const observer = new IntersectionObserver(observerFn, opts) observer.observe(node) function observerFn(entries) { for (const entry of entries) { // Callback pattern if (options.callback) options.callback({ entry, entries, observer }) // Event listener pattern else { node.dispatchEvent( new CustomEvent('intersect', { detail: { entry, entries, observer }, }) ) } } } } Now we can use intersectionObserver with the same easy-to-use API: const node = document.querySelector('.some-element') // Callback pattern const obs = intersectionObserver(node, { callback ({ entry, entries }) { // Do what you want with each entry } }) // Event listener pattern node.addEventListener('intersect', event => { const { entry } = event.detail // Do what you want with each entry }) Disconnecting the Intersection Observer IntersectionObserver’s methods are a union of both resizeObserver and mutationObserver. It has four methods: observe: observe an element unobserve: stops observing one element disconnect: stops observing all elements takeRecords: gets unprocessed records So, we can combine the methods we’ve written in resizeObserver and mutationObserver for this one: export function intersectionObserver(node, options = {}) { // ... return { unobserve(node) { observer.unobserve(node) }, disconnect() { // Take records before disconnecting. const records = observer.takeRecords() observer.disconnect() if (records.length > 0) observerFn(records) }, takeRecords() { return observer.takeRecords() }, } } Now we can stop observing with the unobserve or disconnect method. const node = document.querySelector('.some-element') const obs = intersectionObserver(node, /*...*/) // Disconnect the observer obs.disconnect() IntersectionObserver options In case you were wondering, IntersectionObserver takes in three options: root: The element used to check if observed elements are visible rootMargin: Lets you specify an offset amount from the edges of the root threshold: Determines when to log an observer entry Here’s an article to help you understand IntersectionObserver options. Using this in practice via Splendid Labz Splendid Labz has a utils library that contains resizeObserver, mutationObserver and intersectionObserver.
You can use them if you don’t want to copy-paste the above snippets into every project. import { resizeObserver, intersectionObserver, mutationObserver } from 'splendidlabz/utils/dom' const node = document.querySelector('some-element') const resizeObs = resizeObserver(node, /* ... */) const intersectObs = intersectionObserver(node, /* ... */) const mutateObs = mutationObserver(node, /* ... */) Aside from the code we’ve written together above (and in the previous article), each observer method in Splendid Labz is capable of letting you observe and stop observing multiple elements at once (except mutationObserver because it doesn’t have an unobserve method) const items = document.querySelectorAll('.elements') const obs = resizeObserver(items, { callback ({ entry, entries }) { /* Do what you want here */ } }) // Unobserves two items at once const subset = [items[0], items[1]] obs.unobserve(subset) So it might be just a tad easier to use the functions I’ve already created for you. 😉 Shameless Plug: Splendid Labz contains a ton of useful utilities — for CSS, JavaScript, Astro, and Svelte — that I have created over the last few years. I’ve parked them all into Splendid Labz, so I no longer need to scour the internet for useful functions for most of my web projects. If you take a look, you might just enjoy what I’ve compiled! (I’m still making the docs at the time of writing so it can seem relatively empty. Check back every now and then!) Learning to refactor stuff If you love the way I explained how to refactor the observer APIs, you may find how I teach JavaScript interesting. In my JavaScript course, you’ll learn to build 20 real-life components. We’ll start off simple, add features, and refactor along the way. Refactoring is such an important skill to learn — and in here, I make sure you get to cement it into your brain. That’s it! Hope you had fun reading this piece! A Better API for the Intersection and Mutation Observers originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Color Everything in CSS
- Articles
- color
- CSS functions
An introduction to "Color spaces", "Color models", "Color gamuts," and basically all of the "Color somethings" in CSS.
I have had the opportunity to edit over a lot of the new color entries coming to the CSS-Tricks Almanac. We’ve already published several with more on the way, including a complete guide on color functions: color() hsl() lab() lch() oklab() oklch() rgb() And I must admit: I didn’t know a lot about color in CSS (I still used rgb(), which apparently isn’t what cool people do anymore), so it has been a fun learning experience. One of the things I noticed while trying to keep up with all this new information was how long the glossary of color goes, especially the “color” concepts. There are “color spaces,” “color models,” “color gamuts,” and basically a “color” something for everything. They are all somewhat related, and it can get confusing as you dig into using color in CSS, especially the new color functions that have been shipped lately, like contrast-color() and color-mix(). Hence, I wanted to make the glossary I wish I had when I was hearing for the first time about each concept, and that anyone can check whenever they forget what a specific “color” thing is. As a disclaimer, I am not trying to explain color, or specifically, color reproduction, in this post; that would probably be impossible for a mortal like me. Instead, I want to give you a big enough picture for some technicalities behind color in CSS, such that you feel confident using functions like lab() or oklch() while also understanding what makes them special. What’s a color? Let’s slow down first. In order to understand everything in color, we first need to understand the color in everything. While it’s useful to think about an object being a certain color (watch out for the red car, or cut the white cable!), color isn’t a physical property of objects, or even a tangible thing. Yes, we can characterize light as the main cause of color1, but it isn’t until visible light enters our eyes and is interpreted by our brains that we perceive a color. As said by Elle Stone: Light waves are out there in the world, but color happens in the interaction between light waves and the eye, brain, and mind. Even if color isn’t a physical thing, we still want to replicate it as reliably as possible, especially in the digital era. If we take a photo of a beautiful bouquet of lilies (like the one on my desk) and then display it on a screen, we expect to see the same colors in both the image and reality. However, “reality” here is a misleading term since, once again, the reality of color depends on the viewer. To solve this, we need to understand how light wavelengths (something measurable and replicable) create different color responses in viewers (something not so measurable). Luckily, this task was already carried out 95 years ago by the International Commission on Illumination (CIE, by its French name). I wish I could get into the details of the experiment, but we haven’t gotten into our first color thingie yet. What’s important is that from these measurements, the CIE was able to map all the colors visible to the average human (in the experiment) to light wavelengths and describe them with only three values. Initially, those three primary values corresponded to the red, green, and blue wavelengths used in the experiment, and they made up the CIERGB Color Space, but researchers noticed that some colors required a negative wavelength2 to represent a visible color. To avoid that, a series of transformations were performed on the original CIERGB and the resulting color space was called CIEXYZ. 
This new color space also has three values, X and Z represent the chromaticity of a color, while Y represents its luminance. Since it has three axes, it makes a 3D shape, but if we slice it such that its luminance is the same, we get all the visible colors for a given luminance in a figure you have probably seen before. This is called the xy chromaticity diagram and holds all the colors visible by the average human eye (based on the average viewer in the CIE 1931 experiment). Colors inside the shape are considered real, while those outside are deemed imaginary. Color Spaces The purpose of the last explanation was to reach the CIEXYZ Color Space concept, but what exactly is a “color space”? And why is the CIEXYZ Color Space so important? The CIEXYZ Color Space is a mapping from all the colors visible by the average human eye into a 3D coordinate system, so we only need three values to define a color. Then, a color space can be thought of as a general mapping of color, with no need to include every visible color, and it is usually defined through three values as well. RGB Color Spaces The most well-known color spaces are the RGB color spaces (note the plural). As you may guess from the name, here we only need the amount of red, green, and blue to describe a color. And to describe an RGB color space, we only need to define its “reddest”, “greenest”, and “bluest” values3. If we use coordinates going from 0 to 1 to define a color in the RGB color space, then: (1, 0, 0) means the reddest color. (0, 1, 0) means the greenest color. (0, 0, 1) means the bluest color. However, “reddest”, “bluest”, and “greenest” are only arbitrary descriptions of color. What makes a color the “bluest” is up to each person. For example, which of the following colors do you think is the bluest? As you can guess, something like “bluest” is an appalling description. Luckily, we just have to look back at the CIEXYZ color space — it’s pretty useful! Here, we can define what we consider the reddest, greenest, and bluest colors just as coordinates inside the xy chromaticity diagram. That’s all it takes to create an RGB color space, and why there are so many! Credit: Elle Stone In CSS, the most used color space is the standard RGB (sRGB) color space, which, as you can see in the last image, leaves a lot of colors out. However, in CSS, we can use modern RGB color spaces with a lot more colors through the color() function, such as display-p3, prophoto-rgb, and rec2020. Credit: Chrome Developer Team Notice how the ProPhoto RGB color space goes out of the visible color. This is okay. Colors outside are clamped; they aren’t new or invisible colors. In CSS, besides sRGB, we have two more color spaces: the CIELAB color space and the Oklab color space. Luckily, once we understood what the CIEXYZ color space is, then these two should be simpler to understand. Let’s dig into that next. CIELAB and Oklab Color Spaces As we saw before, the sRGB color space lacks many of the colors visible by the average human eye. And as modern screens got better at displaying more colors, CSS needed to adopt newer color spaces to fully take advantage of those newer displays. That wasn’t the only problem with sRGB — it also lacks perceptual uniformity, meaning that changes in the color’s chromaticity also change its perceived lightness. Check, for example, this demo by Adam Argyle: Created in 1976 by the CIE, CIELAB, derived from CIEXYZ, also encompasses all the colors visible by the human eye. 
It works with three coordinates: L* for perceptual lightness, a* for the amount of red-green, and b* for the amount of yellow-blue in the color. Credit: Linshang Technology It has a way better perceptual uniformity than sRGB, but it still isn’t completely uniform, especially in gradients involving blue. For example, in the following white-to-blue gradient, CIELAB shifts towards purple. Image Credits to Björn Ottosson As a final improvement, Björn Ottosson came up with the Oklab color space, which also holds all colors visible by the human eye while keeping a better perceptual uniformity. Oklab also uses the three L*a*b* coordinates. Thanks to all these improvements, it is the color space I try to use the most lately. Color Models When I was learning about these concepts, my biggest challenge after understanding color spaces was not getting them confused with color models and color gamuts. These two concepts, while complementary and closely related to color spaces, aren’t the same, so they are a common pitfall when learning about color. A color model refers to the mathematical description of color through tuples of numbers, usually involving three numbers, but these values don’t give us an exact color until we pair them with a color space. For example, you know that in the RGB color model, we define color through three values: red, green, and blue. However, it isn’t until we match it to an RGB color space (e.g., sRGB or display-p3) that we have a color. In this sense, a color space can have several color models, like sRGB, which uses RGB, HSL, and HWB. At the same time, a color model can be used in several color spaces. I found plenty of articles and tutorials where “color spaces” and “color models” were used interchangeably. And some places where they had a different definition of color spaces and models than the one provided here. For example, Chrome’s High definition CSS color guide defines CSS’s RGB and HSL as different color spaces, while MDN’s Color Space entry does define RGB and HSL as part of the sRGB color space. Personally, in CSS, I find it easier to understand the idea of RGB, HSL and HWB as different models to access the sRGB color space. Color Gamuts A color gamut is more straightforward to explain. You may have noticed how we have talked about a color space having more colors than another, but it would be more correct to say it has a “wider” gamut, since a color gamut is the range of colors available in a color space. However, a color gamut isn’t only restricted by color space boundaries, but also by physical limitations. For example, an older screen may decrease the color gamut since it isn’t able to display each color available in a given color space. In this case where a color can’t be represented (due to physical limitation or being outside the color space itself), it’s said to be “out of gamut”. Color Functions In CSS, the only color space available used to be sRGB. Nowadays, we can work with a lot of modern color spaces through their respective color functions. As a quick reference, each of the color spaces in CSS uses the following functions: sRGB: We can work in sRGB using the ol’ hexadecimal notation, named colors, and the rgb(), rgba(), hsl(), hsla() and hwb() functions. CIELAB: Here we have the lab() for Cartesian coordinates and lch() for polar coordinates. Oklab: Similar to CIELAB, we have oklab() for Cartesian coordinates and oklch() for polar coordinates. More through the color() and color-mix().
Outside these three color spaces, we can use many more using the color() and color-mix() functions. Specifically, we can use the RGB color spaces: srgb-linear, display-p3, a98-rgb, prophoto-rgb, rec2020, and the XYZ color space: xyz, xyz-d50, or xyz-d65. TL;DR Color spaces are a mapping between available colors and a coordinate system. In CSS, we have three main color spaces: sRGB, CIELAB, and Oklab, but many more are accessible through the color() function. Color models define color with tuples of numbers, but they don’t give us information about the actual color until we pair them with a color space. For example, the RGB model doesn’t mean anything until we assign it an RGB color space. Most of the time, we want to talk about how many colors a color space holds, so we use the term color gamut for the task. However, a color gamut is also tied to the physical limitations of a camera/display. A color may be out-of-gamut, meaning it can’t be represented in a given color space. In CSS, we can access all these color spaces through color functions, of which there are many. The CIEXYZ color space is extremely useful to define other color spaces, describe their gamuts, and convert between them. References Completely Painless Programmer’s Guide to XYZ, RGB, ICC, xyY, and TRCs (Elle Stone) Color Spaces (Bartosz Ciechanowski) The CIE XYZ and xyY Color Spaces (Douglas A. Kerr) From personal project to industry standard (Björn Ottosson) High definition CSS color guide (Adam Argyle) Color Spaces: Explained from the Ground Up (Video Tech Explained) Color Space (MDN) What Makes a Color Space Well Behaved? (Elle Stone) Footnotes 1 Light is the main cause of color, but color can be created by things other than light. For example, rubbing your closed eyes mechanically stimulates your retina, creating color in what’s called phosphene. ⤴️ 2 If negative light also makes you scratch your head, and for more info on how the CIEXYZ color space was created, I highly recommend Douglas A. Kerr’s The CIE XYZ and xyY Color Spaces paper. ⤴️ 3 We also need to define the darkest dark color (“black”) and the lightest light color (“white”). However, for well-behaved color spaces, these two can be abstracted from the reddest, bluest, and greenest colors. ⤴️ Color Everything in CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
CSS Color Functions
- Guides
- color
CSS has a number of functions that can be used to set, translate, and manipulate colors. Learn what they are and how they are used with a bunch of examples to get you started.
If you asked me a few months ago, “What does it take for a website to stand out?” I may have said fancy animations, creative layouts, cool interactions, and maybe just the general aesthetics, without pointing out something in particular. If you ask me now, after working on color for the better part of the year, I can confidently say it’s all color. Among all the aspects that make a design, a good color system will make it as beautiful as possible. However, color in CSS can be a bit hard to fully understand since there are many ways to set the same color, and sometimes they even look the same, but underneath are completely different technologies. That’s why, in this guide, we will walk through all the ways you can set up colors in CSS and all the color-related properties out there!
Colors are in everything
They are in your phone, in what your eye sees, and on any screen you look at; they essentially capture everything. Design-wise, I see the amazing use of colors on sites listed over at awwwards.com, and I’m always in awe. Not all color is the same. In fact, similar colors can live in different worlds, known as color spaces. Take, for example, sRGB, the color space used on the web for the better part of its existence and hence the most known. While it’s the most used, there are many colors that are simply missing in sRGB. Newer color spaces like CIELAB and Oklab bring them in, covering a wider range of colors that sRGB could only dream of, but don’t let me get ahead of myself.
What’s a color space?
A color space is the way we arrange and represent the colors that exist within a device, like printers and monitors. Different types of color spaces exist in media (Rec2020, Adobe RGB, etc.), but not all of them are covered in CSS. Luckily, the ones we have are sufficient to produce all the awesome and beautiful colors we need. In this guide, we will be diving into the three main color spaces available in CSS: sRGB, CIELAB, and Oklab.
The sRGB Color Space
sRGB is one of the first color spaces we learn. Inside, there are three color functions, which are essentially notations to define a color: rgb(), hsl(), and hwb(). sRGB has been a standard color space for the web since 1996. However, it’s closer to how old computers represented color than to how humans understand it, so it has some problems, like not being able to capture the full gamut of modern screens. Still, many modern applications and websites use sRGB, so even though it is the “old way” of doing things, it is still widely accepted and used today.
The rgb() function
rgb() uses three values, r, g, and b, which specify the redness, greenness, and blueness of the color you want. All three values are non-negative and go from 0 to 255. .element { color: rgb(245 123 151); } It also has an optional value (the alpha value) preceded by a forward slash. It determines the level of opacity for the color, which goes from 0 (or 0%) for a completely transparent color, to 1 (or 100%) for a fully opaque one. .element { color: rgb(245 123 151 / 20%); } There are two ways you can write inside rgb(): either the legacy syntax that separates the three values with commas, or the modern syntax that separates them with spaces. You want to combine the two syntax formats, yes? That’s a no-no. It won’t even work.
/* This would not work */ .element { color: rgb(225, 245, 200 / 0.5); } /* Neither will this */ .element { color: rgb(225 245 200, 0.5); } /* Or this */ .element { color: rgb(225, 245 200 / 0.5); } But following one consistent format will do the trick, so do that instead. If you’re so used to the old syntax that it’s hard to move on, continue using the legacy syntax; if you’re willing to try and stick to something new, use the modern syntax. /* Valid (Modern syntax) */ .element { color: rgb(245 245 255 / 0.5); } /* Valid (Legacy syntax) */ .element { color: rgb(245, 245, 255, 0.5); }
The rgba() function
rgba() is essentially the same as rgb() with an extra alpha value used for transparency. In terms of syntax, the rgba() function can be written in two ways: comma-separated and without percentages, or space-separated with the alpha value written after a forward slash (/). .element { color: rgba(100, 50, 0, 0.5); } .element { color: rgba(100 50 0 / 0.5); } So, what’s the difference between rgba() and rgb()? Breaking news! There is no difference. Initially, only rgba() could set the alpha value for opacity, but in recent years, rgb() has supported transparency using the forward slash (/) before the alpha value. rgb() also supports legacy syntax (commas) and modern syntax (spaces), so there’s practically no reason to use rgba() anymore; it’s even noted as a CSS mistake by folks at W3C. In a nutshell, rgb() and rgba() are the same, so just use rgb(). /* This works */ .element-1 { color: rgba(250 30 45 / 0.8); } /* And this works too, so why not just use this? */ .element-2 { color: rgb(250 30 45 / 0.8); }
The hexadecimal notation
The hexadecimal CSS color code is a 3-, 4-, 6-, or 8-digit code (8 being the maximum) for colors in sRGB. It’s basically a shorter way of writing rgb(). The hexadecimal color (or hex color) begins with a hash token (#) followed by a hexadecimal number, which goes from 0 to 9 and then skips to letters a to f (a being 10, b being 11, and so on, up to f for 15). In the hexadecimal color system, the 6-digit style is done in pairs. Each pair represents red (RR), green (GG), and blue (BB). Each value in the pair can go from 00 to FF, which is equivalent to 255 in rgb(). Notice how I used caps for the letters (F) and not lowercase letters like I did previously? Well, that’s because hexadecimals are not case-sensitive in CSS, so you don’t have to worry about uppercase or lowercase letters when dealing with hexadecimal colors. 3-digit hexadecimal. The 3-digit hexadecimal system is a shorter way of writing the 6-digit hexadecimal system, where each value represents the color’s redness, greenness, and blueness, respectively. .element { color: #abc; } In reality, each value in the 3-digit system is duplicated and then translated to a visible color. .element { color: #abc; /* Equals #AABBCC */ } BUT, this severely limits the colors you can set. What if I want to target the color 213 in the red space, or how would I get a blue of value 103? It’s impossible. That’s why you can only get a total of 4,096 colors here as opposed to the roughly 17 million in the 6-digit notation. Still, if you want a fast way of getting a certain color in hexadecimal without having to worry about the millions of other colors, use the 3-digit notation. 4-digit hexadecimal. This is similar to the 3-digit hexadecimal notation except it includes the optional alpha value for opacity.
It’s a shorter way of writing the 8-digit hexadecimal, which also means that all values here are repeated once during color translation. .element { color: #abcd; /* Same as #AABBCCDD */ } For the alpha value, 0 represents 00 (a fully transparent color) and F represents FF (a fully opaque color). 6-digit hexadecimal. The 6-digit hexadecimal system just specifies a hexadecimal color’s redness, greenness, and blueness without an alpha value for color opacity. .element { color: #abcdef; } 8-digit hexadecimal. The 8-digit hexadecimal system specifies a hexadecimal color’s redness, greenness, blueness, and its alpha value for color opacity. Basically, it is complete for color control in sRGB. .element { color: #faded101; }
The hsl() function
Both hsl() and rgb() live in the sRGB space, but they access colors differently. And while the consensus is that hsl() is far more intuitive than rgb(), it all boils down to your preference. hsl() takes three values: h, s, and l, which set its hue, saturation, and lightness, respectively. The hue sets the base color and represents a direction on the color wheel, so it’s written in angles from 0deg to 360deg. The saturation sets how much of the base color is present and goes from 0 (or 0%) to 100 (or 100%). The lightness represents how close to white or black the color gets. One cool thing: the hue angle goes from 0deg to 360deg, but we can just as well use negative angles or angles above 360deg, and they will circle back to the right hue. That’s especially useful for infinite color animation. Pretty neat, right? Plus, you can easily get a complementary color from the opposite angle (i.e., adding 180deg to the current hue) on the color wheel. /* Current color */ .element { color: hsl(120deg 40 60 / 0.8); } /* Complementary color */ .element { color: hsl(300deg 40 60 / 0.8); } You want to combine the two syntax formats like in rgb(), yes? That’s also a no-no. It won’t work. /* This would not work */ .element { color: hsl(130deg, 50, 20 / 0.5); } /* Neither will this */ .element { color: hsl(130deg 50 20, 0.5); } /* Or this */ .element { color: hsl(130deg 50, 20 / 0.5); } Instead, stick to one of the syntaxes, like in rgb(): /* Valid (Modern syntax) */ .element { color: hsl(130deg 50 20 / 0.5); } /* Valid (Legacy syntax) */ .element { color: hsl(130deg, 50, 20, 0.5); }
The hsla() function
hsla() is essentially the same as hsl(). It uses three values to represent its color’s hue (h), saturation (s), and lightness (l), and yes (again), an alpha value for transparency (a). We can write hsla() in two different ways: comma-separated, or space-separated with the alpha value written after a forward slash (/). .element { color: hsla(120deg, 100%, 50%, 0.5); } .element { color: hsla(120deg 100% 50% / 0.5); } So, what’s the difference between hsla() and hsl()? Breaking news (again)! They’re the same. hsl() and hsla() both support legacy and modern syntax and have the power to increase or reduce color opacity. So, why does hsla() still exist? Well, apart from being one of the mistakes of CSS, many applications on the web still use hsla() since there wasn’t a way to set opacity with hsl() when it was first conceived. My advice: just use hsl(). It’s the same as hsla() but less to write. /* This works */ .element-1 { color: hsla(120deg 80 90 / 0.8); } /* And this works too, so why not just use this?
*/ .element-2 { color: hsl(120deg 80 90 / 0.8); }
The hwb() function
hwb() also uses hue for its first value, but instead takes two values for whiteness and blackness to determine how your colors will come out (and yes, it also has an optional transparency value, a, just like rgb() and hsl()). .element { color: hwb(80deg 20 50 / 0.5); } The first value h is the same as the hue angle in hsl(), which represents the color position on the color wheel from 0 (or 0deg) to 360 (or 360deg). The second value, w, represents the whiteness in the color. It ranges from 0 (or 0%) (no white) to 100 (or 100%) (full white if b is 0). The third value, b, represents the blackness in the color. It ranges from 0 (or 0%) (no black) to 100 (or 100%) (fully black if w is 0). The final (optional) value is the alpha value, a, for the color’s opacity, preceded by a forward slash. The value’s range is from 0.0 (or 0%) to 1.0 (or 100%). Although this color function is barely used, it’s completely valid, so it’s up to personal preference.
Named colors
CSS named colors are hardcoded keywords representing predefined colors in sRGB. You are probably used to the basics: white, blue, black, red, but there are a lot more, totaling 147 in all, defined in the CSS Color Module Level 4 specification. Named colors are often discouraged because their names do not always match the color you would expect.
The CIELAB Color Space
The CIELAB color space is a relatively new color space on the web that represents a wider color gamut, closer to what the human eye can see, so it holds a lot more color than the sRGB space.
The lab() function
For this color function, we have three axes in a space-separated list to determine how the color is set. .element { color: lab(50 20 20 / 0.9); } The first value l represents the degree of whiteness to blackness of the color. Its range is 0 (or 0%) (black) to 100 (or 100%) (white). The second value a represents the degree of greenness to redness of the color. Its range is from -125 (or -100%) (green) to 125 (or 100%) (red). The third value b represents the degree of blueness to yellowness of the color. Its range is also from -125 (or -100%) (blue) to 125 (or 100%) (yellow). The fourth and final value is its alpha value for the color’s opacity. The value’s range is from 0.0 (or 0%) to 1.0 (or 100%). This is useful when you’re trying to obtain new colors and provide support for screens that can display them. Actually, most screens and all major browsers now support lab(), so you should be good. The CSS lab() color function’s a and b values are actually unbounded, meaning they don’t technically have an upper or lower limit. But, in practice, those are their limits according to the spec.
The lch() function
The CSS lch() color function is said to be better and more intuitive than lab(). .element { color: lch(10 30 300deg); } They both use the same color space, but instead of having l, a, and b, lch() uses lightness, chroma, and hue. The first value l represents the degree of whiteness to blackness of the color. Its range is 0 (or 0%) (black) to 100 (or 100%) (white). The second value c represents the color’s chroma (which is like saturation). Its range is from 0 (or 0%) to 150 (or 100%). The third value h represents the color hue. The value’s range is also from 0 (or 0deg) to 360 (or 360deg). The fourth and final value is its alpha value for the color’s opacity. The value’s range is from 0.0 (or 0%) to 1.0 (or 100%). The CSS lch() color function’s chroma (c) value is actually unbounded.
This means it doesn’t technically have an upper or lower limit, but in practice, the chroma values above are the limits according to the spec.
The OkLab Color Space
Björn Ottosson created this color space as an “OK” and even better version of the CIELAB color space. It was created to address limitations of CIELAB, such as image processing in lab() (for example, making an image grayscale) and perceptual uniformity. The two color functions in CSS that correspond to this color space are oklab() and oklch(). Perceptual uniformity occurs when there’s a smooth change in the direction of a gradient color from one point to another. If you notice stark contrasts in an rgb() gradient when transitioning from one hue to another, that is referred to as a perceptually non-uniform colormap. In oklab(), the change from one color to another is even, without the stark contrasts you get with rgb(). The Oklab color space solves those stark contrasts and gives you access to many more colors not present in sRGB. Oklab actually provides better saturation of colors while still maintaining the hue and lightness present in colors in CIELAB (and even a smoother transition between colors!).
The oklab() function
The oklab() color function, just like lab(), generates colors according to their lightness, red/green axis, blue/yellow axis, and an alpha value for color opacity. Also, the values for oklab() are different from those of lab(), so please watch out for that. .element { color: oklab(30% 20% 10% / 0.9); } The first value l represents the degree of whiteness to blackness of the color. Its range is 0 (or 0%) (black) to 1.0 (or 100%) (white). The second value a represents the degree of greenness to redness of the color. Its range is from -0.4 (or -100%) (green) to 0.4 (or 100%) (red). The third value b represents the degree of blueness to yellowness of the color. The value’s range is also from -0.4 (or -100%) (blue) to 0.4 (or 100%) (yellow). The fourth and final value is its alpha value for the color’s opacity. The value’s range is from 0.0 (or 0%) to 1.0 (or 100%). Again, this solves one of the issues in lab, which is perceptual uniformity, so if you’re looking for a better alternative to lab, use oklab(). The CSS oklab() color function’s a and b values are actually unbounded, meaning they don’t technically have an upper or lower limit. But, theoretically, those are the limits for the values according to the spec.
The oklch() function
The oklch() color function, just like lch(), generates colors according to their lightness, chroma, hue, and an alpha value for color opacity. The main difference here is that it solves the issues present in lab() and lch(). .element { color: oklch(40% 20% 100deg / 0.7); } The first value l represents the degree of whiteness to blackness of the color. Its range is 0.0 (or 0%) (black) to 1.0 (or 100%) (white). The second value c represents the color’s chroma. Its range is from 0 (or 0%) to 0.4 (or 100%) (it theoretically doesn’t exceed 0.5). The third value h represents the color hue. The value’s range is also from 0 (or 0deg) to 360 (or 360deg). The fourth and final value is its alpha value for the color’s opacity. The value’s range is from 0.0 (or 0%) to 1.0 (or 100%). The CSS oklch() color function’s chroma (c) value is actually unbounded, meaning it doesn’t technically have an upper or lower limit. But, theoretically, the chroma values above are the limits according to the spec.
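As a rough sketch of what that means in practice (the class names are hypothetical), the same white-to-blue gradient can be interpolated in sRGB and in Oklab; the Oklab version avoids the purple shift mentioned earlier.
/* Same stops, different interpolation color space */
.gradient-srgb { background: linear-gradient(in srgb, white, blue); }
.gradient-oklab { background: linear-gradient(in oklab, white, blue); } /* smoother, no purple cast */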
The color() function
The color() function allows access to colors in nine different color spaces, as opposed to the previous color functions mentioned, which only allow access to one. To use this function, you simply need to be aware of these five values: The first value specifies the color space you want to access colors from. It can be srgb, srgb-linear, display-p3, a98-rgb, prophoto-rgb, rec2020, xyz, xyz-d50, or xyz-d65. The next three values (c1, c2, and c3) specify the coordinates of the color in that color space, each typically ranging from 0.0 to 1.0. The fifth and final value is its alpha value for the color’s opacity. The value’s range is from 0.0 (or 0%) to 1.0 (or 100%).
The color-mix() function
The color-mix() function mixes two colors of any type in a given color space. Basically, you can create an endless number of colors with this method and explore more options than you normally would with any other color function. A pretty powerful CSS function, I would say. .element { color: color-mix(in oklab, hsl(40 20 60) 80%, red 20%); } You’re basically mixing two colors of any type in a color space. Do take note, the accepted color spaces here are different from the color spaces accepted in the color() function. To use this function, you must be aware of these three values: The first value, in colorspace, specifies the interpolation color space used to mix the colors, and it can be any of these 15 color spaces: srgb, srgb-linear, display-p3, a98-rgb, prophoto-rgb, rec2020, lab, oklab, xyz, xyz-d50, xyz-d65, hsl, hwb, lch, and oklch. The second and third values each specify an accepted color value and a percentage from 0% to 100%.
The Relative Color Syntax
All CSS color functions support the relative color syntax. The relative color syntax, simply put, is a way to access other colors from another color function or value, then translate them into the values of the current color function. It goes “from <color>” to another. Here’s how it works. We have: .element { color: color-function(from origin-color c1 c2 c3 / alpha); } The first value from is a mandatory keyword you must set to extract the color values from origin-color. The second value, origin-color, represents a color function or value (or even another relative color) that you want to get color from. The next three values, c1, c2, and c3, represent the current color function’s color channels, and they correspond with the color function’s valid color values. The sixth and final value is its alpha value for the color’s opacity. Its range is from 0.0 (or 0%) to 1.0 (or 100%), and it is either taken from the origin-color or set manually. Let’s take an example, say, converting a color from rgb() to lab(): .element { color: lab(from rgb(255 210 01 / 0.5) l a b / a); } Here the l, a, b, and alpha channels are filled in from the rgb() origin color once it’s converted into lab(). Now, let’s take a look at another example where we convert a color from rgb() to oklch(): .element { color: oklch(from rgb(255 210 01 / 0.5) 50% 20% h / a); } Although the l and c values were changed, the h and alpha values are taken from the original color, which in this case is a light yellowish color in rgb(). You can even be wacky and use math functions:
.element { color: oklch(from rgb(255 210 01 / 0.5) calc(50% + var(--a)) calc(20% + var(--b)) h / a); } The relative color syntax is, however, different from the color() function in that you have to include the color space name and then fully write out the channels, like this: .element { color: color(from origin-color colorspace c1 c2 c3 / alpha); } Remember, the color-mix() function is not a part of this. You can have relative color functions inside the color functions you want to mix, yes, but the relative color syntax is not available in color-mix() directly.
Color gradients
CSS is totally capable of transitioning from one color to another. See the “CSS Gradients Guide” for a full run-down, including the different types of gradients with examples.
Properties that support color values
There are a lot of properties that support the use of color. Just so you know, this list does not contain deprecated properties.
accent-color: Sets the accent color for UI controls like checkboxes and radio buttons, and any other form element. progress { accent-color: lightgreen; } Accent colors are a way to style unique elements with respect to the chosen color scheme.
background-color: Applies solid colors as background on an element. .element { background-color: #ff7a18; }
border-color: Shorthand for setting the color of all four borders. /* Sets all border colors */ .element { border-color: lch(50 50 20); } /* Sets top, right, bottom, left border colors */ .element { border-color: black green red blue; }
box-shadow: Adds shadows to an element for creating the illusion of depth. The property accepts a number of arguments, one of which sets the shadow color. .element { box-shadow: 0 3px 10px rgb(0 0 0 / 0.2); }
caret-color: Specifies the color of the text input cursor (caret). .element { caret-color: lch(30 40 40); }
color: Sets the foreground color of text and text decorations. .element { color: lch(80 10 20); }
column-rule-color: Sets the color of a line between columns in a multi-column layout. This property can’t act alone, so you need to set the columns and column-rule-style properties first. .element { columns: 3; column-rule-style: solid; column-rule-color: lch(20 40 40); }
fill: Sets the color of the SVG shape. .element { fill: lch(40 20 10); }
flood-color: Specifies the flood color to use for <feFlood> and <feDropShadow> elements inside the <filter> element for <svg>. This should not be confused with the flood-color attribute, as this is a CSS property and that’s an SVG attribute (even though they basically do the same thing). If this property is specified, it overrides the flood-color attribute. .element { flood-color: lch(20 40 40); }
lighting-color: Specifies the color of the lighting source to use for <feDiffuseLighting> and <feSpecularLighting> elements inside the <filter> element for <svg>. .element { lighting-color: lch(40 10 20); }
outline-color: Sets the color of an element’s outline. .element { outline-color: lch(20 40 40); }
stop-color: Specifies the color of gradient stops for the <stop> tags for <svg>. .element { stop-color: lch(20 40 40); }
stroke: Defines the color of the outline of <svg> shapes. .element { stroke: lch(20 40 40); }
text-decoration-color: Sets the color of text decoration lines like underlines. .element { text-decoration-color: lch(20 40 40); }
text-emphasis-color: Specifies the color of emphasis marks on text. .element { text-emphasis-color: lch(70 20 40); }
text-shadow: Applies shadow effects to text, including color. .element { text-shadow: 1px 1px 1px lch(50 10 30); }
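As a quick sketch of how these properties play together (the .card class and --brand custom property are hypothetical), several of them can share a single oklch() base color via a custom property:
/* One base color reused across several color-accepting properties */
.card {
  --brand: oklch(65% 0.15 250);
  color: var(--brand);
  border-color: var(--brand);
  outline-color: var(--brand);
  caret-color: var(--brand);
  accent-color: var(--brand);
}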
Almanac references Color functions Almanac on Feb 22, 2025 rgb() .element { color: rgb(0 0 0 / 0.5); } color Sunkanmi Fafowora Almanac on Feb 22, 2025 hsl() .element { color: hsl(90deg, 50%, 50%); } color Sunkanmi Fafowora Almanac on Jun 12, 2025 hwb() .element { color: hwb(136 40% 15%); } color Gabriel Shoyombo Almanac on Mar 4, 2025 lab() .element { color: lab(50% 50% 50% / 0.5); } color Sunkanmi Fafowora Almanac on Mar 12, 2025 lch() .element { color: lch(10% 0.215 15deg); } color Sunkanmi Fafowora Almanac on Apr 29, 2025 oklab() .element { color: oklab(25.77% 25.77% 54.88%); } color Sunkanmi Fafowora Almanac on May 10, 2025 oklch() .element { color: oklch(70% 0.15 240); } color Gabriel Shoyombo Almanac on May 2, 2025 color() .element { color: color(rec2020 0.5 0.15 0.115 / 0.5); } color Sunkanmi Fafowora Color properties Almanac on Apr 19, 2025 accent-color .element { accent-color: #f8a100; } color Geoff Graham Almanac on Jan 13, 2025 background-color .element { background-color: #ff7a18; } color Chris Coyier Almanac on Jan 27, 2021 caret-color .element { caret-color: red; } color Chris Coyier Almanac on Jul 11, 2022 color .element { color: #f8a100; } color Sara Cope Almanac on Jul 11, 2022 column-rule-color .element { column-rule-color: #f8a100; } color Geoff Graham Almanac on Jan 27, 2025 fill .element { fill: red; } color Geoff Graham Almanac on Jul 11, 2022 outline-color .element { outline-color: #f8a100; } color Mojtaba Seyedi Almanac on Dec 15, 2024 stroke .module { stroke: black; } color Geoff Graham Almanac on Aug 2, 2021 text-decoration-color .element { text-decoration-color: orange; } color Marie Mosley Almanac on Jan 27, 2023 text-emphasis .element { text-emphasis: circle red; } color Joel Olawanle Almanac on Jan 27, 2023 text-shadow p { text-shadow: 1px 1px 1px #000; } color Sara Cope Related articles & tutorials Article on Aug 12, 2024 Working With Colors Guide color Sarah Drasner Article on Aug 23, 2022 The Expanding Gamut of Color on the Web color Ollie Williams Article on Oct 13, 2015 The Tragicomic History of CSS Color Names color Geoff Graham Article on Feb 11, 2022 A Whistle-Stop Tour of 4 New CSS Color Features color Chris Coyier Article on Feb 7, 2022 Using Different Color Spaces for Non-Boring Gradients color Chris Coyier Article on Oct 29, 2024 Come to the light-dark() Side color Sara Joy Article on Sep 24, 2024 Color Mixing With Animation Composition color Geoff Graham Article on Sep 13, 2016 8-Digit Hex Codes?
color Chris Coyier Article on Feb 24, 2021 A DRY Approach to Color Themes in CSS color Christopher Kirk-Nielsen Article on Apr 6, 2017 Accessibility Basics: Testing Your Page For Color Blindness color Chris Coyier Article on Mar 9, 2020 Adventures in CSS Semi-Transparency Land color Ana Tudor Article on Mar 4, 2017 Change Color of All Four Borders Even With `border-collapse: collapse;` color Daniel Jauch Article on Jan 2, 2020 Color contrast accessibility tools color Robin Rendle Article on Aug 14, 2019 Contextual Utility Classes for Color with Custom Properties color Christopher Kirk-Nielsen Article on Jun 26, 2021 Creating Color Themes With Custom Properties, HSL, and a Little calc() color Dieter Raber Article on May 4, 2021 Creating Colorful, Smart Shadows color Chris Coyier Article on Feb 21, 2018 CSS Basics: Using Fallback Colors color Chris Coyier Article on Oct 21, 2019 Designing accessible color systems color Robin Rendle Article on Jun 22, 2021 Mixing Colors in Pure CSS color Carter Li Article on Jul 26, 2016 Overriding The Default Text Selection Color With CSS color Chris Coyier Article on Oct 21, 2015 Reverse Text Color Based on Background Color Automatically in CSS color Robin Rendle Article on Dec 27, 2019 So Many Color Links color Chris Coyier Article on Aug 18, 2018 Switch font color for different backgrounds with CSS color Facundo Corradini Article on Jan 20, 2020 The Best Color Functions in CSS? color Chris Coyier Article on Dec 3, 2021 What do you name color variables? color Chris Coyier Article on May 8, 2025 Why is Nobody Using the hwb() Color Function? color Sunkanmi Fafowora Table of contents Colors are in everything What’s a color space? The sRGB Color Space The CIELAB Color Space The OkLab Color Space The color() function The color-mix() function The Relative Color Syntax Color gradients Properties that support color values Almanac references Related articles and tutorials CSS Color Functions originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Updates for apps in the European Union
The European Commission has required Apple to make a series of additional changes under the Digital Markets Act:
Communication and promotion of offers
- Today, we’re introducing updated terms that let developers with apps in the European Union storefronts of the App Store communicate and promote offers for purchase of digital goods or services available at a destination of their choice. The destination can be a website, alternative app marketplace, or another app, and can be accessed outside the app or within the app via a web view or native experience.
- App Store apps that communicate and promote offers for digital goods or services will be subject to new business terms for those transactions — an initial acquisition fee, store services fee, and for apps on the StoreKit External Purchase Link Entitlement (EU) Addendum, the Core Technology Commission (CTC). The CTC reflects value Apple provides developers through ongoing investments in the tools, technologies, and services that enable them to build and share innovative apps with users.
- Music streaming apps on the App Store in the European Economic Area (EEA) wanting to use the Music Streaming Services Entitlement (EEA) can use these options.
Update to Business Terms for Apps in the European Union
- By January 1, 2026, Apple plans to move to a single business model in the EU for all developers. Under this single business model, Apple will transition from the Core Technology Fee (CTF) to the CTC on digital goods or services. The CTC will apply to digital goods or services sold by apps distributed from the App Store, Web Distribution, and/or alternative marketplaces.
- Apps currently under the Alternative Terms Addendum for Apps in the EU continue to be subject only to the CTF until the transition to the CTC is fully implemented next year. At that time, qualifying transactions will be subject to the CTC, and the CTF will no longer apply. Additional details regarding this transition will be provided at a later date.
User Experience Update
- Beginning with iOS 18.6 and iPadOS 18.6, iOS and iPadOS will provide an updated user experience in the EU for installing alternative marketplaces or apps from a developer’s website. Additionally, later this year, we will provide an API which will allow developers to initiate the download of alternatively distributed apps they publish from within their app.
To learn more, view Communication and promotion of offers on the App Store in the EU. To read the full terms, view the Alternative Terms Addendum for Apps in the EU or the StoreKit External Purchase Link Entitlement Addendum for EU Apps. You can also request a 30-minute online appointment to ask questions and provide feedback about these changes.
Today @ WWDC25: Day 5
Today @ WWDC25: Day 4
Today @ WWDC25: Day 3
Today @ WWDC25: Day 2
Welcome to Day 2 at WWDC25! Watch the Platforms State of the Union recap, then dive into all the updates to Swift, SwiftUI, and Xcode through group labs and video sessions.
WWDC25 Platforms State of the Union Recap
Today’s group labs
- Developer Tools group lab
- Swift group lab
- Metal & game technologies group lab
- Camera & Photos frameworks group lab
Find out what’s new for Apple developers
Discover the latest advancements on all Apple platforms. With incredible new features in iOS, iPadOS, macOS, tvOS, visionOS, and watchOS, and major enhancements across languages, frameworks, tools, and services, you can create even more unique experiences in your apps and games.
Updated agreements and guidelines now available
The Apple Developer Program License Agreement and App Review Guidelines have been revised to support new features and updated policies, and to provide clarification. Please review the changes below.
Apple Developer Program License Agreement
- Section 3.3.3(D): Updated language on requirements for data and privacy.
- Section 3.3.3(N): Updated requirements for use of the ID Verifier APIs.
- Definitions, 3.3.3(P): Specified requirements for use of the Declared Age Range API.
- Definitions, 3.3.7(G): Specified requirements for use of the Wi-Fi Aware framework.
- Definitions, 3.3.7(H): Specified requirements for use of the TelephonyMessagingKit APIs.
- Definitions, 3.3.7(I): Specified requirements for use of the Default Dialer APIs.
- Definition, Section 3.3.8(H), Attachment 11: Specified requirements for use of EnergyKit.
- Definitions, 3.3.8(I): Specified requirements for use of the Foundation Models framework.
- Definitions, Attachment 4: Specified requirements for use of the iCloud Extended Share APIs.
- Section 6.4: Removed language on Bitcode submissions as it is no longer applicable, and replaced it with terms regarding iOS app widgets on CarPlay.
- Section 7.4(B): Updated and clarified requirements for TestFlight related to digital purchases and tester invitations.
- Section 7.7: Updated language on customization of icons and widgets.
- Section 7.8: Specified terms related to the Apple Games app.
- Attachment 6: Updated terms regarding the entity that distributes the map in China.
App Review Guidelines
- 3.1.2(a), bullet 2: This language has been deleted (“You may offer a single subscription that is shared across your own apps and services”).
- 3.1.2(a), bullet 5: This language has been relocated to Guideline 3.2.2(x).
- 3.2.1(viii): Clarified that financial apps must have necessary licensing and permissions in the locations where developers make them available.
- 3.2.2(x): This new guideline contains the language relocated from Guideline 3.1.2(a), bullet 5, and permits developers to otherwise incentivize users to take specific actions within app.
Please sign in to your account to accept the updated Apple Developer Program License Agreement.
Translations of the guidelines will be available on the Apple Developer website within one month.
Today @ WWDC25: Day 1
WWDC25 is here! Watch a quick welcome video to help you get started, then dive into sessions and sign up for tomorrow’s group labs.
Welcome to WWDC25
Tuesday’s group labs
- Developer Tools group lab
- Swift group lab
- Metal & game technologies group lab
- Camera & Photos frameworks group lab
Introducing the 2025 Apple Design Award winners and finalists
An artistic puzzler with a wildlife twist. A translation app powered by machine learning and stickers. And a card game that’s been on quite a run. Say hello to the wildly inventive crop of 2025 Apple Design Award honorees.
Hello Developer: June 2025
WWDC25 is just days away! Here’s everything you need to get ready — and a big announcement to start things off. Say hello to the wildly inventive crop of 2025 Apple Design Award winners and finalists.
Sleek peek.
WWDC25 is almost here! Find out how to tune in to the Keynote and Platforms State of the Union on Monday, June 9.
Tax and Price updates for Apps, In-App Purchases, and Subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Tax and price updates
As of May 16:
Your proceeds from the sale of eligible apps and In-App Purchases in Brazil have been modified to account for the introduction of the 10% Contribuições de Intervenção no Domínio Econômico (CIDE) tax for developers based outside of Brazil.
Beginning June 2:
Pricing for apps and In-App Purchases will be updated for Brazil and Kazakhstan if you haven’t selected one of these storefronts as the base storefront for your app or In-App Purchase.¹ The updates in Brazil also account for the introduction of the 10% CIDE tax.
If you’ve selected Brazil or Kazakhstan as the base storefront for your app or In-App Purchase, prices won’t change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your In‑App Purchase is an auto‑renewable subscription. Prices also won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In‑App Purchases, and auto‑renewable subscriptions at any time.
Additional upcoming changes
Beginning August 4:
All auto-renewable subscription price increases in Austria, Germany, and Poland will require customers to consent to the new price for their subscription to continue renewing.
- Price increases scheduled with a start date on or after August 4: All customers must consent to the new price. If a subscriber doesn’t agree to the new price or takes no action, Apple will continue to request consent approximately weekly through email, push notifications, and in-app messaging until their subscription expires at the end of their current billing cycle.
- Price increases scheduled with a start date before August 4: Current notice criteria will remain in effect, even if the renewal occurs after August 4 (for annual subscriptions, renewal could be as late as August 2026). See criteria, noting that consent may apply to customers depending on the size or velocity of your price increases.
To help ensure a smooth transition, we recommend avoiding scheduling price increases with a start date between August 2 and August 4.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Pricing and availability start times by country or region
Set a price for an In-App Purchase
Learn more about your proceeds
¹ Excludes auto-renewable subscriptions.
Hello Developer: May 2025
In this edition: Join us to learn how to make your apps more accessible to everyone. Plus, check out our new and refreshed Pathways, and uncover the time-traveling secrets of the Apple Design Award-winning game The Wreck.
Random access memories: Inside the time-shifting narrative of The Wreck
The Wreck is filed under games, but it’s also been called a visual novel, an interactive experience, and a playable movie. Florent Maurin is OK with all of it. “I like to think we’re humbly participating in expanding the idea of what a video game can be,” he says.
Maurin is the co-writer, designer, and producer of The Wreck — and here we’ll let you decide what to call it. The Wreck tells the tale of Junon, a writer who’s abruptly called to a hospital to make a life-changing decision involving her mother. The story is anchored by the accident that lends the game its name, but the ensuing narrative is splintered, and begins to take shape only as players navigate through seemingly disconnected scenes that can be viewed multiple times from different perspectives. The Wreck is far from light. But its powerful story and unorthodox mechanics combine for a unique experience.
“We tried to make a game that’s a bit off the beaten path,” says Maurin, who’s also the president and CEO of The Pixel Hunt studio, “and hopefully it connects with people.”
ADA FACT SHEET
The Wreck- Winner: Social impact
- Team: The Pixel Hunt
- Available on: iPhone, iPad
- Team size: 4
Maurin is a former children’s journalist who worked at magazines and newspapers in his native France. After nearly 10 years in the field, he pivoted to video games, seeing them as a different way to share real stories about real people. “Reality is a source of inspiration in movies, novels, and comic books, but it’s almost completely absent in the gaming landscape,” he says. “We wanted to challenge that.”
Founded in 2014, The Pixel Hunt has released acclaimed titles like the App Store Award–winning historical adventure Inua and the text-message adventure Bury Me, My Love. It was near the end of the development process for the latter that Maurin and his daughter were involved in a serious car accident.
“It was honestly like a movie trope,” he says. “Time slowed down. Weird memories that had nothing to do with the moment flashed before my eyes. Later I read that the brain parses through old memories to find relevant knowledge for facing that kind of situation. It was so sudden and so intense, and I knew I wanted to make something of it. And what immediately came to mind was a game.”
Junon's interactions with the hospital staff drive the narrative in The Wreck.
But Maurin was too close to the source material; the accident had left a lasting impact, and he separated himself from the creative process. “I think I was trying to protect myself from the intensity of that feeling,” he says. “That’s when Alex, our art director, told me, ‘Look, this is your idea, and I don’t think it’ll bloom if you don’t really dig deep and own the creative direction.’ And he was right.”
That was art director Alexandre Grilletta, who helmed the development team alongside lead developer Horace Ribout, animator Peggy Lecouvey, sound designers Luis and Rafael Torres, and Maurin’s sister, Coralie, who served as a “second brain” during writing. (In a nice bit of serendipity, the game’s script was written in an open-source scripting language developed by Inkle, which used it for their own Apple Design Award-winning game, Overboard, in 2022.)
Junon's sister might not be an entirely welcome presence in The Wreck.
The story of The Wreck is split into two parts. The first — what the team calls the “last day” — follows Junon at the hospital while she faces her mother’s situation as well as revealing interactions with her sister and ex-husband. Maurin says the “last day” was pretty straightforward from a design standpoint. “We knew we wanted a cinematic look,” he says, “so we made it look like a storyboard with some stop-motion animation and framing. It was really nothing too fancy. The part that was way more challenging was the memories.”
Those “memories” — and the backstory they tell — employ a clever mechanism in which players view a scene as a movie and have the ability to fast-forward or rewind the scene. These memory scenes feel much different; they’re dreamlike and inventive, with swooping camera angles, shifting perspectives, and words that float in the air. “I saw that first in What Remains of Edith Finch,” says Maurin. “I thought it was an elegant way of suggesting the thing that triggers a character’s brain in that moment.”
Junon's thoughts are often conveyed in floating phrases that surround her in stressful moments.
Successive viewings of these memories can reveal new details or cast doubt on their legitimacy — something Maurin wrote from experience. “I’ll give you an example,” he says. “When my parents brought my baby sister home from the hospital, I remember the exact moment they arrived in the car. It’s incredibly vivid. But the weird part is: This memory is in the third person. I see myself tiptoeing to the window to watch them in the street — which is impossible! I rewrote my own memory for some reason, and only my brain knows why it works like that. But it feels so real.”
Throughout the development process, Maurin and team held close to the idea of a “moving and mature” story. In fact, early prototypes of The Wreck were more gamified — in one version, players grabbed floating items — but playtesters found the activity distracting. “It took them out of the story,” Maurin says. “It broke the immersion. And that was counterproductive to our goal.”
Items in The Wreck — like this tin of peppermints — often carry a larger meaning.
Maurin admits that approaching games with this mindset can be a challenge. “Some players are curious about our games and absolutely love them. Some people think, ‘These don’t fit the perception of what I think I enjoy.’ And maybe the games are for them, and maybe they’re not. But this is what we’ve been doing for 11 years. And I think we're getting better at it.”
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Updated guidelines now available
The App Review Guidelines have been updated for compliance with a United States court decision regarding buttons, external links, and other calls to action in apps. These changes affect apps distributed on the United States storefront of the App Store, and are reflected in updates to Guidelines 3.1.1, 3.1.1(a), 3.1.3, and 3.1.3(a).
View the App Review Guidelines
Translations of the guidelines will be available on the Apple Developer website within one month.
Hello Developer: April 2025
In this edition: Revisit foundational sessions, join us to dive into SwiftUI, and meet an Apple Design Award winner that defies description.
Rooms at the top: How this ADA-winning team built a title that defies description
Ask Jason Toff whether his Apple Design Award winner is a game or an app, and his answer is yes.
“There’s no one-sentence description for Rooms, and that can be a blessing,” laughs Toff, CEO and head designer of Things, Inc. “It’s not entirely a game, and it’s not entirely a tool. It’s more like a toy.”
It’s also a blank canvas, cozy game, coding teacher, and social network — but we’re getting ahead of ourselves. At its heart, Rooms is a collection of user-generated 3-D spaces that feels like the open-ended world of the early internet. Start with an empty room or existing template, then fill it with an array of voxel decorations, items, pets, and avatars to create whatever space you like: a college apartment, medieval castle chamber, floating fantasy realm, pirate ship, or a Weezer concert (really), to name just a few. The only limits are the room’s boundaries — and Rooms fans have even gotten around those. “Our 404 page is a room with no walls,” Toff says, “so people just started copying it to work around the constraint.”
ADA FACT SHEET
Rooms- Winner: Visuals and Graphics
- Team: Things, Inc.
- Available on: iOS, iPadOS
- Team size: 4
Download Rooms from the App Store
In fact, that community element is a strong point: This creative tapestry of quirky games, tranquil havens, and clever ideas has been conjured by real people, which makes Rooms a social network as well. What’s more, users can click on each item to reveal its underlying code, offering them more options for customization.
To create Rooms — which, incidentally, won the ADA for Visuals and Graphics in games — Toff and cofounders Nick Kruge and Bruno Oliveira threw themselves back into their childhoods. “I was obsessed with Legos as a kid,” says Toff, not unexpectedly. “I found myself wondering, ‘What’s the digital equivalent of that?’”
Rooms isn’t just about rooms; creators have plenty of ways to noodle on their ideas.
Drawing on that inspiration — as well as Toff’s experiences with Kid Pix on his dad’s 1989-era Mac — the Rooms team began envisioning something that, as Oliveira says, kept the floor low but the ceiling high. “We wanted anyone from 4-year-olds to their grandparents to be able to use Rooms,” he says, “and that meant making something free-form and creative.”
It also meant building something that gave a sense of approachability and creativity, which led them right to voxels. “Blocks have a charm, but they can also be kind of ugly,” Toff laughs. “Luckily, Bruno’s were cute and soft, so they felt approachable and familiar.” And from Oliveira’s side, blocks offered a practical value. “It’s much easier to do 3-D modeling with blocks,” says Oliveira. “You can just add or remove voxels whenever you want, which lowers the bar for everyone.”
We wanted anyone from 4-year-olds to their grandparents to be able to use Rooms, and that meant making something free-form and creative.
Jason Toff, CEO and head designer of Things, Inc.
Rooms launched in 2023 as a web-based app that included 1,000 voxel objects and allowed users to write their own code. It gained traction through both word of mouth and, more directly, a video that went viral in the cozy-gaming community. “All of a sudden, we had all these people coming,” says Oliveira, “and we realized we needed to prioritize the mobile app. Nick was like, ‘I think we can get feature parity with desktop on the iPhone screen,’ and we basically pulled a rabbit out of a hat.” Today, the vast majority of Rooms users are on mobile, where they spend the bulk of their time editing. “We were just shocked by how much time people were spending making rooms,” he says. “These weren’t quick five-minute projects. We did not anticipate that.”
Of course the Things, Inc. team rebuilt their own offices in Rooms.
All that building fed into a social aspect as well. Toff says most of the items in Rooms are now created, edited, and amplified by lots of different users. “Here’s a good example: We have a sway effect that makes things wave back and forth a little,” he says. “Someone realized that if they put some branches on a tree and added that effect, the tree immediately looked alive. Now everyone’s doing that. There’s a real additive effect to building in Rooms.” Today, the Rooms library contains more than 10,000 items.
There’s a lot of power under the hood, too. “Rooms uses a Lua scripting language that runs in a C++ context,” says Oliveira, “so it’s kind of Lua, encased in C++, encased in Unity, encased in iOS.” Every room, he says, is a new Unity instance. And adding native iOS elements — like sliders on the Explore page and a bottom navigation — gives what he calls the “design chef’s kiss.”
An early sketch of Rooms shows how the room design came together early in the process.
Like its community, the Rooms team is used to moving fast. “One day I said, ‘It would be cool if this had a D-pad and A/B buttons,’” says Toff, “and about 10 hours later Bruno was like, ‘Here you go.’” On another lark, Toff mentioned that it would be fun to let users fly around their rooms, and Kruge and Oliveira promptly created a “camera mode” that’s come to be known internally as the “Jason-Cam.”
That’s satisfying to a team that simply set out to build a cutting-edge plaything. “We always had this metaphor that Rooms was a swimming pool with a shallow side and a deep side,” says Oliveira. “It should be fun for people dabbling in the shallow side. But it should also be amazing for people swimming in the deep end. If you just want to look at rooms, you can. But you can also dive all the way down and write complicated code. There’s something for everyone.”
Meet the 2024 Apple Design Award winners
WWDC25: June 9-13, 2025
Join the worldwide developer community online for a week of technology and creativity.
Be there for the reveal of the latest Apple tools, frameworks, and features. Learn to elevate your apps and games through video sessions hosted by Apple engineers and designers. Engage with Apple experts in labs and connect with the worldwide developer community. All online and at no cost.
Assassin’s Creed Shadows comes to Mac
It’s an ice-cold late winter’s morning in Canada, but the offices of Ubisoft Quebec are ablaze with excitement.
The Ubisoft team is preparing the release of Assassin’s Creed Shadows, the 14th main entry in the series and an evolution for the franchise in nearly every detail. It’s set in feudal 16th-century Japan, a rich and elegant period that’s been long sought-after by fans and Ubisoft team members alike. It introduces a pair of fierce protagonists: Yasuke, a powerful warrior of African origin, and Naoe, an agile Shinobi assassin, both brought to life with attention to historical accuracy. Its world feels alive with an ever-changing dynamism that’s apparent in everything from the shifting weather to the rotating seasons to the magical interplay of light and shadow.
And what’s more, it’s set to release on Mac the same day it arrives on PCs and consoles.
“It’s been a longtime dream to bring the game to Mac,” says Ubisoft executive producer Marc-Alexis Côté, who debuted the game on Mac during the WWDC24 Keynote. “It’s incredible that I can now open a MacBook Pro and get this level of immersion.” Shadows will also be coming later to iPad with M-series chips.
Naoe, one of the game’s two protagonists, is an agile assassin who’s at her best when striking from the shadows.
Today marks one of the first times that the gaming community will get its hands on Shadows, and to celebrate the occasion, the Ubisoft offices — a mix of cozy chalet-worthy reclaimed wood and wide-open windows that afford a view of snowy Quebec City rooftops — have been reskinned with an Assassin’s Creed theme, including a display that emphasizes the heft of Yasuke’s weapons, especially an imposing-looking 13-pound model of the character’s sword. (On this day, the display is hosted by associate game director Simon Lemay-Comtois, who appears quite capable of wielding it.)
Download Assassin's Creed Shadows from the Mac App Store
Côté calls Shadows his team’s “most ambitious” game. In crafting the game’s expansive world, Ubisoft’s development team took advantage of an array of advanced Mac technologies: Metal 3 (working in concert with Ubisoft’s next-generation Anvil engine), Apple silicon, and a mix of HDR support and real-time ray tracing on Macs with M3 and M4 that Côté says was “transformative” in creating the game’s immersion.
It’s been a longtime dream to bring the game to Mac.
Marc-Alexis Côté, Ubisoft executive producer
“Seeing those millions of lines of code work natively on a Mac was a feeling that’s hard to describe,” Côté says. “When you look at the game’s performance, the curve Apple is on with successive improvements to the M-series chips year after year, and the way the game looks on an HDR screen, you’re like, ‘Is this real?’”
Assassin’s Creed Shadows is a balance of the technical and creative. For the former, associate technical director Mathieu Belanger says the capabilities of Mac laid the groundwork for technical success. “The architecture of the hardware is so well done, thanks in part to the unified memory between the GPU and CPU. That made us think the future is bright for gaming on the platform. So many things about doing this on Mac were great right out of the box.”
Naoe’s counterpart, Yasuke, prefers the use of brute force.
On the creative side, Ubisoft creative director Jonathan Dumont focused on a different opportunity. “The important thing was: Does this feel right? Is it what we want to send to players? And the answer was yes.”
The creative team’s goal was nothing short of “making this world feel alive,” says Martin Bedard, a 20-year Ubisoft veteran who served as the game’s technology director (and is very good at playing as Naoe). “You’re put into a moment that really existed,” he says. “This story is your playground.”
There are also fluffy kittens. We’ll get to those.
The ever-changing seasons lend an incredible variety to the game’s environments.
And there’s tremendous power behind the beauty, because the game’s biomes, seasons, weather, and lighting are all dynamic creations. The sunset hour bathes the mountains in soft purple light; the sun’s rays float in through leaves and temple roofs. Pretty much every room has a candle in it, which means the light is always changing. “Look at the clouds here,” says Bedard, pointing at the screen. “That’s not a rendering. These are all fluid-based cloud simulations.”
“Japan feels like it’s 80 percent trees and mountains,” says Dumont. “If you’re building this world without the rain, and the winds, and the mountains, it doesn’t feel right.”
Wherever you are, wherever you go, everything is beautiful and alive.
Mathieu Belanger, associate technical director
And those winds? “We developed a lot of features that were barely possible before, and one of them was a full simulation of the wind, not just an animation,” says Belanger. “We even built a humidity simulation that gathers clouds together.” For the in-game seasons, Ubisoft developed an engine that depicted houses, markets, and temples, in ever-changing conditions. “This was all done along the way over the past four years,” he says.
To pursue historical accuracy, Dumont and the creative team visited Japan to study every detail, from big-picture details (like town maps) to very specific ones (like the varnish that would have been applied to 16th-century wood). It wasn’t always a slam dunk, says Côté: In one visit, their Japanese hosts recommended a revision to the light splashing against the mountains. “We want to get all those little details right,” he says. (A “full-immersion version,” entirely in Japanese with English subtitles, is available.)
To recreate the world of 16th-century Japan, the Ubisoft creative team visited Japan to study every detail.
Ubisoft’s decision to split the protagonist into two distinct characters with different identities, skill sets, origin stories, and class backgrounds came early in the process. (“That was a fun day,” laughs Belanger.) Ubisoft team members emphasize that choosing between Naoe and Yasuke is a matter of personal preference — lethal subtlety vs. brute force. Players can switch between characters at any time, and, as you might suspect, the pair grows stronger together as the story goes on. Much of Naoe’s advantage comes from her ability to linger in the game’s shadows — not just behind big buildings, but wherever the scene creates a space for her to hide. “The masterclass is clearing out a board without being spotted once,” says Bedard.
(The Hideout is) peaceful. You can say, ‘I feel like putting some trees down, seeing what I collected, upgrading my buildings, and petting the cats.’
Jonathan Dumont, Ubisoft creative director
Which brings us to the Hideout, Naoe and Yasuke’s home base and a bucolic rural village that acts as a zen-infused respite from the ferocity of battle. “It’s a place that welcomes you back,” says Dumont. It’s eminently customizable, both from a game-progression standpoint and in terms of aesthetics. Where the battle scenes are a frenzy of bruising combat or stealth attacks, the Hideout is a refuge for supplies, artwork, found objects, and even a furry menagerie of cats, dogs, deer, and other calming influences. “There are progressions, of course,” says Dumont, “but it’s peaceful. You can say, ‘I feel like putting some trees down, seeing what I collected, upgrading my buildings, and petting the cats.’”
“The kittens were a P1 feature,” laughs associate game director Dany St-Laurent.
Yasuke prepares to face off against an opponent in what will likely be a fruitful battle.
Yet for all those big numbers, Dumont says the game boils down to something much simpler. “I just think the characters work super-well together,” he says. “It’s an open-world game, yes. But at its core, it features two characters you’ll like. And the game is really about following their journey, connecting with them, exploring their unique mysteries, and seeing how they flow together. And I think the way in which they join forces is one of the best moments in the franchise.”
And if the Ubisoft team has its way, there will be plenty more moments to come. “I think the game will scale for years to come on the Mac platform,” says Côté. “Games can be more and more immersive with each new hardware release. We’re trying to create something here where more people can come with day-one games on the Mac, because I think it’s a beautiful platform.”
Hello Developer: March 2025
In this edition: An incredible AAA game comes to Mac. Plus, the latest on International Women’s Day activities, WeChat, and more.
Apple Developer is now on WeChat
Check out the official Apple Developer WeChat account to find news, announcements, and upcoming activities for the developer community.
Get ready with the latest beta releases
The beta versions of iOS 18.4, iPadOS 18.4, macOS 15.4, tvOS 18.4, visionOS 2.4, and watchOS 11.4 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 16.3.
As previewed last year, iOS 18.4 and iPadOS 18.4 include support for default translation apps for all users worldwide, and default navigation apps for EU users.
Beginning April 24, 2025, apps uploaded to App Store Connect must be built with Xcode 16 or later using an SDK for iOS 18, iPadOS 18, tvOS 18, visionOS 2, or watchOS 11.
New requirement for apps on the App Store in the European Union
As of today, apps without trader status have been removed from the App Store in the European Union (EU) until trader status is provided and verified by Apple.
Account Holders or Admins in the Apple Developer Program will need to enter this status in App Store Connect to comply with the Digital Services Act.
New features for APNs token authentication are now available
You can now take advantage of upgraded security options when creating new token authentication keys for the Apple Push Notification service (APNs).
Team-scoped keys enable you to restrict your token authentication keys to either development or production environments, providing an additional layer of security and ensuring that keys are used only in their intended environments.
Topic-specific keys provide more granular control by enabling you to associate each key with a specific bundle ID, allowing for more streamlined and organized key management. This is particularly beneficial for large organizations that manage multiple apps across different teams.
Your existing keys will continue to work for all push topics and environments. At this time, you don’t have to update your keys unless you want to take advantage of the new capabilities.
For detailed instructions on how to secure your communications with APNs, read Establishing a token-based connection to APNs.
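For teams that haven’t yet adopted token-based authentication, the sketch below shows roughly what an APNs provider token is: a JWT whose header carries the key ID, whose claims carry the team ID and an issue time, and whose signature is produced with the ES256 key from the downloaded .p8 file. This is a minimal illustration, not Apple sample code; the key ID, team ID, and key contents are placeholders you supply yourself, and the signing flow is the same whether the key was created as team-scoped or topic-specific.

```swift
import Foundation
import CryptoKit

// Minimal sketch of building an APNs provider token (a JWT signed with ES256).
// keyID, teamID, and the .p8 contents are placeholders; this is illustrative only.
struct APNsTokenSigner {
    let keyID: String          // Key ID shown next to the key in your developer account
    let teamID: String         // Your 10-character Team ID
    let privateKeyPEM: String  // Contents of the downloaded AuthKey_<keyID>.p8 file

    func makeToken(issuedAt: Date = Date()) throws -> String {
        // JWT header and claims required by APNs token authentication.
        let header: [String: String] = ["alg": "ES256", "kid": keyID]
        let claims: [String: Any] = ["iss": teamID, "iat": Int(issuedAt.timeIntervalSince1970)]

        func base64URL(_ data: Data) -> String {
            data.base64EncodedString()
                .replacingOccurrences(of: "+", with: "-")
                .replacingOccurrences(of: "/", with: "_")
                .replacingOccurrences(of: "=", with: "")
        }

        let headerPart = base64URL(try JSONSerialization.data(withJSONObject: header))
        let claimsPart = base64URL(try JSONSerialization.data(withJSONObject: claims))
        let signingInput = Data("\(headerPart).\(claimsPart)".utf8)

        // ES256 signature over "header.claims" using the P-256 key from the .p8 file.
        let key = try P256.Signing.PrivateKey(pemRepresentation: privateKeyPEM)
        let signature = try key.signature(for: signingInput)

        return "\(headerPart).\(claimsPart).\(base64URL(signature.rawRepresentation))"
    }
}
```

The resulting token goes in the authorization: bearer header of each request to APNs, alongside the usual apns-topic header, and should be regenerated periodically, since APNs rejects tokens issued more than an hour ago.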
Upcoming changes to offers and trials for subscriptions in South Korea
Starting February 14, 2025, new regulatory requirements in South Korea will apply to all apps with offers and trials for auto-renewing subscriptions.
To comply, if you provide free trials or discounted offers for auto-renewing subscriptions in your app or game, you must obtain additional consent after the initial transaction. The App Store will help obtain this consent by informing affected subscribers via email, push notification, and an in-app price consent sheet, and by asking them to agree to the new price.
This additional consent must be obtained from customers within 30 days from the payment or conversion date for:
- Free trials converting to paid subscriptions
- Discounted offers converting to standard-price subscriptions
Apps that do not offer a free trial or discounted offer before a subscription converts to the regular price are not affected.
Tax and price updates for apps, In-App Purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Tax and pricing updates for February
As of February 6:
Your proceeds from the sale of eligible apps and In‑App Purchases have been modified in:
- Azerbaijan: value-added tax (VAT) introduction of 18%
- Peru: VAT introduction of 18%
- Slovakia: Standard VAT rate increase from 20% to 23%
- Slovakia: Reduced VAT rate introduction of 5% for ebooks
- Estonia: Reduced VAT rate increase from 5% to 9% for news publications, magazines, and other periodicals
- Finland: Reduced VAT rate increase from 10% to 14% for ebooks
Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Azerbaijan and Peru.¹
As of February 24:
Pricing for apps and In-App Purchases will be updated for the Azerbaijan and Peru storefronts if you haven’t selected one of these as the base for your app or In‑App Purchase.² These updates also consider VAT introductions listed in the tax updates section above.
If you’ve selected the Azerbaijan or Peru storefront as the base for your app or In-App Purchase, prices won’t change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your In‑App Purchase is an auto‑renewable subscription. Prices also won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In‑App Purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Pricing and availability start times by country or region
Set a price for an In-App Purchase
Beginning April 1:
As a result of last year’s change in Japan’s tax regulations, Apple (through iTunes K.K. in Japan) is now designated as a Specified Platform Operator by the Japan tax authority. All paid apps and In-App Purchases (including game items, such as coins) sold by non-Japan-based developers on the App Store in Japan will be subject to the platform tax regime. Apple will collect and remit a 10% Japanese consumption tax (JCT) to the National Tax Agency JAPAN on such transactions at the time of purchase. Your proceeds will be adjusted accordingly.
Please note any prepaid payment instruments (such as coins) sold prior to April 1, 2025, will not be subject to platform taxation, and the relevant JCT compliance should continue to be managed by the developer.
For specific information on how the JCT affects in-game items, see Question 7 in the Tax Agency of Japan’s Q&A about Platform Taxation of Consumption Tax.
Learn more about your proceeds
¹ Translations of the updated agreement are available on the Apple Developer website today.
² Excludes auto-renewable subscriptions.
Game distribution on the App Store in Vietnam
The Vietnamese Ministry of Information and Communications (MIC) requires games to be licensed to remain available on the App Store in Vietnam. To learn more and apply for a game license, review the regulations.
Once you have obtained your license:
- Sign in to App Store Connect.
- Enter the license number and the associated URL in the description section of your game’s product page.
- Note that you only need to provide this information for the App Store localization displayed on the Vietnam storefront.
- Submit an update to App Review.
If you have questions on how to comply with these requirements, please contact the Authority of Broadcasting and Electronic Information (ABEI) under the Vietnamese Ministry of Information and Communications.
Hello Developer: February 2025
In this edition: The latest on developer activities, the Swift Student Challenge, the team behind Bears Gratitude, and more.
The good news bears: Inside the adorably unorthodox design of Bears Gratitude
Here’s the story of how a few little bears led their creators right to an Apple Design Award.
Bears Gratitude is a warm and welcoming title developed by the Australian husband-and-wife team of Isuru Wanasinghe and Nayomi Hettiarachchi.
Journaling apps just don’t get much cuter: Through prompts like “Today isn’t over yet,” “I’m literally a new me,” and “Compliment someone,” the Swift-built app and its simple hand-drawn mascots encourage people to get in the habit of celebrating accomplishments, fostering introspection, and building gratitude. “And gratitude doesn’t have to be about big moments like birthdays or anniversaries,” says Wanasinghe. “It can be as simple as having a hot cup of coffee in the morning.”
ADA FACT SHEET
Bears Gratitude
- Winner: Delight and Fun
- Available on: iOS, iPadOS, macOS
- Team size: 2
Download Bears Gratitude from the App Store
Wanasinghe is a longtime programmer who’s run an afterschool tutoring center in Sydney, Australia, for nearly a decade. But the true spark for Bears Gratitude and its predecessor, Bears Countdown, came from Hettiarachchi, a Sri Lankan-born illustrator who concentrated on her drawing hobby during the Covid-19 lockdown.
Wanasinghe is more direct. “The art is the heart of everything we do,” he says.
Bears Gratitude was developed by the Australian husband-and-wife team of Isuru Wanasinghe and Nayomi Hettiarachchi.
In fact, the art is the whole reason the app exists. As the pandemic months and drawings stacked up, Hettiarachchi and Wanasinghe found themselves increasingly attached to her cartoon creations, enough that they began to consider how to share them with the world. The usual social media routes beckoned, but given Wanasinghe’s background, the idea of an app offered a stronger pull.
“In many cases, you get an idea, put together a design, and then do the actual development,” he says. “In our case, it’s the other way around. The art drives everything.”
The art is the heart of everything we do.
Isuru Wanasinghe, Bears Gratitude cofounder
With hundreds of drawings at their disposal, the couple began thinking about the kinds of apps that could host them. Their first release was Bears Countdown, which employed the drawings to help people look ahead to birthdays, vacations, and other marquee moments. Countdown was never intended to be a mass-market app; the pair didn’t even check its launch stats on App Store Connect. “We’d have been excited to have 100 people enjoy what Nayomi had drawn,” says Wanasinghe. “That’s where our heads were at.”
But Countdown caught on with a few influencers and became enough of a success that the pair began thinking of next steps. “We thought, well, we’ve given people a way to look forward,” says Wanasinghe. “What about reflecting on the day you just had?”
Hettiarachchi’s art samples get a close inspection from one of her trusted associates.
Gratitude keeps the cuddly cast from Countdown, but otherwise the app is an entirely different beast. It was also designed in what Wanasinghe says was a deliberately unusual manner. “Our design approach was almost bizarrely linear,” says Wanasinghe. “We purposely didn’t map out the app. We designed it in the same order that users experience it.”
Other unorthodox decisions followed, including the absence of a sign-in screen. “We wanted people to go straight into the experience and start writing,” he says. The home-screen journaling prompts are presented via cards that users flip through by tapping left and right. “It’s definitely a nonstandard UX,” says Wanasinghe, “but we found over and over again that the first thing users did was flip through the cards.”
Our design approach was almost bizarrely linear. We purposely didn’t map out the app. We designed it in the same order that users experience it.
Isuru Wanasinghe, Bears Gratitude cofounder
Another twist: The app’s prompts are written in the voice of the user, which Wanasinghe says was done to emphasize the personal nature of the app. “We wrote the app as if we were the only ones using it, which made it more relatable,” he says.
Then there are the bears, which serve not only as a distinguishing hook in a busy field, but also as a design anchor for its creators. “We’re always thinking: ‘Instead of trying to set our app apart, how do we make it ours?’ We use apps all the time, and we know how they behave. But here we tried to detach ourselves from all that, think of it as a blank canvas, and ask, ‘What do we want this experience to be?’”
Early design sketches for Bears Gratitude show the collection of swipe-able prompt cards.
Bears Gratitude isn’t a mindfulness app — Wanasinghe is careful to clarify that neither he nor Hettiarachchi is a therapist or mental health professional. “All we know about are the trials and tribulations of life,” he says.
But those trials and tribulations have reached a greater world. “People have said, ‘This is just something I visit every day that brings me comfort,’” says Wanasinghe. “We’re so grateful this is the way we chose to share the art. We’re plugged into people’s lives in a meaningful way.”
Meet the 2024 Apple Design Award winners
Apply for the Swift Student Challenge now through February 23
Submissions for the Swift Student Challenge 2025 are now open through February 23. You have three more weeks to design, test, refine, and submit your app playground for consideration to be named one of 350 winners.
What to know:
- The Challenge is free to enter — you just need access to an iPad or Mac with Swift Playgrounds or Xcode.
- The best app ideas are personal — let your passion shine through your work.
- No formal coding experience required — the Challenge is open to students of all levels.
- Your app playground doesn’t need to be intricate — it should take three minutes or less to experience.
Where to start:
- Explore tools and tutorials to build an incredible app playground.
- Get inspired by last year’s Distinguished Winners, learn about their winning apps, and read about their experiences at Apple Park.
Introducing the Advanced Commerce API
The App Store facilitates billions of transactions annually to help developers grow their businesses and provide a world-class customer experience. To further support developers’ evolving business models — such as exceptionally large content catalogs, creator experiences, and subscriptions with optional add-ons — we’re introducing the Advanced Commerce API.
Developers can apply to use the Advanced Commerce API to support eligible App Store business models and more flexibly manage their In-App Purchases within their app. These purchases leverage the power of the trusted App Store commerce system, including end-to-end payment processing, tax support, customer service, and more, so developers can focus on providing great app experiences.
Apps without trader status will be removed from the App Store in the EU
Starting February 17, 2025: Due to the European Union’s Digital Services Act, apps without trader status will be removed from the App Store in the European Union until trader status is provided and verified, if necessary.
As a reminder, Account Holders or Admins in the Apple Developer Program need to enter trader status in App Store Connect for apps on the App Store in the European Union in order to comply with the Digital Services Act.
Reminder: Upcoming Changes to the App Store Receipt Signing Intermediate Certificate
As part of ongoing efforts to improve security and privacy on Apple platforms, the App Store receipt signing intermediate certificate is being updated to use the SHA-256 cryptographic algorithm. This certificate is used to sign App Store receipts, which are the proof of purchase for apps and In-App Purchases.
This update is being completed in multiple phases and some existing apps on the App Store may be impacted by the next update, depending on how they verify receipts.
Starting January 24, 2025, if your app performs on-device receipt validation and doesn’t support the SHA-256 algorithm, your app will fail to validate the receipt. If your app prevents customers from accessing the app or premium content when receipt validation fails, your customers may lose access to their content.
If your app performs on-device receipt validation, update your app to support certificates that use the SHA-256 algorithm; alternatively, use the AppTransaction and Transaction APIs to verify App Store transactions.
For more details, view TN3138: Handling App Store receipt signing certificate changes.
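If you’d rather sidestep receipt parsing altogether, the StoreKit 2 route mentioned above lets the system verify signed transactions for you. The following is a minimal sketch under those assumptions (AppTransaction requires iOS 16/macOS 13 or later), not a complete entitlement check:

```swift
import StoreKit

// Minimal sketch: let StoreKit 2 verify App Store signatures instead of
// parsing and validating the receipt on device.
func hasVerifiedPurchase() async -> Bool {
    // App-level proof of purchase, verified by StoreKit (iOS 16+/macOS 13+).
    if let appTransaction = try? await AppTransaction.shared,
       case .verified = appTransaction {
        return true
    }

    // Or inspect the customer's current In-App Purchase entitlements.
    for await entitlement in Transaction.currentEntitlements {
        if case .verified(let transaction) = entitlement {
            print("Verified entitlement for product:", transaction.productID)
            return true
        }
    }
    return false
}
```

Both APIs hand back a VerificationResult, so the unverified case can be handled explicitly rather than failing silently when the signing chain changes.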
Algorithm changes to server connections for Apple Pay on the Web
Starting next month, Apple will change the supported algorithms that secure server connections for Apple Pay on the Web. In order to maintain uninterrupted service, you’ll need to ensure that your production servers support one or more of the six designated ciphers before February 4, 2025.
These algorithm changes will affect any secure connection you’ve established as part of your Apple Pay integration, including the following touchpoints:
- Requesting an Apple Pay payment session (Apple Pay on the Web only)
- Renewing your domain verification (Apple Pay on the Web only)
- Receiving and handling merchant token notifications for recurring, deferred, and automatic-reload transactions (Apple Pay on the Web and in app)
- Creating and updating Wallet Orders (Apple Pay on the Web and in app)
- Managing merchant onboarding via the Apple Pay Web Merchant Registration API (payment service provider (PSP) and e-commerce platforms only)
Hello Developer: January 2025
In the first edition of the new year: Bring SwiftUI to your app in Cupertino, get ready for the Swift Student Challenge, meet the team behind Oko, and more.
Walk this way: How Oko leverages AI to make street crossings more accessible
Oko is a testament to the power of simplicity.
The 2024 Apple Design Award winner for Inclusivity and 2024 App Store Award winner for Cultural Impact leverages Artificial Intelligence to help blind or low-vision people navigate pedestrian walkways by alerting them to the state of signals — “Walk,” “Don’t Walk,” and the like — through haptic, audio, and visual feedback. The app instantly affords more confidence to its users. Its bare-bones UI masks a powerful blend of visual and AI tools under the hood. And it’s an especially impressive achievement for a team that had no iOS or Swift development experience before launch.
“The biggest feedback we get is, ‘It’s so simple, there’s nothing complex about it,’ and that’s great to hear,” says Vincent Janssen, one of Oko’s three Belgium-based founders. “But we designed it that way because that’s what we knew how to do. It just happened to also be the right thing.”
ADA FACT SHEET
From left: Willem Van de Mierop, Michiel Janssen, and Vincent Janssen are the three cofounders of Oko. The app’s name means “eye.”
Oko
- Winner: Inclusivity
- Team: AYES BV
- Available on: iPhone
- Team size: 6
- Previous accolades: 2024 App Store Award winner for Cultural Impact; App Store Editors’ Choice
Download Oko from the App Store
For Janssen and his cofounders, brother Michiel and longtime friend Willem Van de Mierop, Oko — the name translates to “eye” — was a passion project that came about during the pandemic. All three studied computer science with a concentration in AI, and had spent years working in their hometown of Antwerp. But by the beginning of 2021, the trio felt restless. “We all had full-time jobs,” says Janssen, “but the weekends were pretty boring.” Yet they knew their experience couldn’t compare to that of a longtime friend with low vision, who Janssen noticed was feeling more affected as the autumn and winter months went on.
“We really started to notice that he was feeling isolated more than others,” says Janssen. “Here in Belgium, we were allowed to go for walks, but you had to be alone or with your household. That meant he couldn’t go with a volunteer or guide. As AI engineers, that got us thinking, ‘Well, there are all these stories about autonomous vehicles. Could we come up with a similar system of images or videos that would help people find their way around public spaces?’”
I had maybe opened Xcode three times a few years before, but otherwise none of us had any iOS or Swift experience.
Vincent Janssen, Oko founder
The trio began building a prototype that consisted of a microcomputer, 3D-printed materials, and a small portable speaker borrowed from the Janssen brothers’ father. Today, Janssen calls it “hacky hardware,” something akin to a small computer with a camera. But it allowed the team and their friend — now their primary tester — to walk the idea around and poke at the technology’s potential. Could AI recognize the state of a pedestrian signal? How far away could it detect a Don’t Walk sign? How would it perform in rain or wind or snow? There was just one way to know. “We went out for long walks,” says Janssen.
And while the AI and hardware performed well in their road tests, issues arose around the hardware’s size and usability, and the team began to realize that software offered a better solution. The fact that none of the three had the slightest experience building iOS apps was simply a hurdle to clear. “I had maybe opened Xcode three times a few years before,” says Janssen, “but otherwise none of us had any iOS or Swift experience.”
Oko helps people navigate pedestrian walkways through interactive maps and audio, visual, and haptic feedback.
So that summer, the team pivoted to software, quitting their full-time jobs and throwing themselves into learning Swift through tutorials, videos, and trusty web searches. The core idea crystallized quickly: Build a simple app that relied on Camera, the Maps SDK, and a powerful AI algorithm that could help people get around town. “Today, it’s a little more complex, but in the beginning the app basically opened up a camera feed and a Core ML model to process the images,” says Janssen, noting that the original model was brought over from Python. “Luckily, the tools made the conversion really smooth.” (Oko’s AI models run locally on device.)
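As a rough illustration of that kind of pipeline (a camera frame handed to a locally bundled Core ML model), the following hypothetical Swift sketch uses Vision to run a classifier over a single frame. It is not Oko’s code; SignalClassifier and the example labels are invented placeholders.

```swift
import CoreML
import CoreVideo
import Vision

// Hypothetical sketch, not Oko's code: classify a single camera frame with a
// locally bundled Core ML model via Vision. "SignalClassifier" stands in for
// an Xcode-generated model class; the labels are examples only.
func classifyFrame(_ pixelBuffer: CVPixelBuffer, completion: @escaping (String?) -> Void) {
    guard let mlModel = try? SignalClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: mlModel) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Top classification result, e.g. "walk" or "dont_walk" for a signal classifier.
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```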
With the software taking shape, more field testing was needed. The team reached out to accessibility-oriented organizations throughout Belgium, drafting a team of 100 or so testers to “codevelop the app,” says Janssen. Among the initial feedback: Though Oko was originally designed to be used in landscape mode, pretty much everyone preferred holding their phones in portrait mode. “I had the same experience, to be honest,” said Janssen, “but that meant we needed to redesign the whole thing.”
The Oko team navigates through prototypes at a review session in their hometown of Antwerp, Belgium.
Other changes included amending the audio feedback to more closely mimic existing real-world sounds, and addressing requests to add more visual feedback. The experience amounted to getting a real-world education about accessibility on the fly. “We found ourselves learning about VoiceOver and haptic feedback very quickly,” says Janssen.
Still, the project went remarkably fast — Oko launched on the App Store in December 2021, not even a year after the trio conceived of it. “It took a little while to do things, like make sure the UI wasn’t blocked, especially since we didn’t fully understand the code we wrote in Swift,” laughs Janssen, “but in the end, the app was doing what it needed to do.”
We found ourselves learning about VoiceOver and haptic feedback.
Vincent Janssen, Oko founder
The accessibility community took notice. And in the following months, the Oko team continued expanding its reach — Michiel Janssen and Van de Mierop traveled to the U.S. to meet with accessibility organizations and get firsthand experience with American street traffic and pedestrian patterns. But even as the app expanded, the team retained its focus on simplicity. In fact, Janssen says, they explored and eventually jettisoned some expansion ideas — including one designed to help people find and board public transportation — that made the app feel a little too complex.
Today, the Oko team numbers 6, including a fleet of developers who handle more advanced Swift matters. “About a year after we launched, we got feedback about extra features and speed improvements, and needed to find people who were better at Swift than we are,” laughs Janssen. At the same time, the original trio is now learning about business, marketing, and expansion.
At its core, Oko remains a sparkling example of a simple app that completes its task well. “It’s still a work in progress, and we’re learning every day,” says Janssen. In other words, there are many roads yet to cross.
Meet the 2024 Apple Design Award winners
Get ready with the latest beta releases
The beta versions of iOS 18.3, iPadOS 18.3, macOS 15.3, tvOS 18.3, visionOS 2.3, and watchOS 11.3 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 16.2.
App Store Award winners announced
Join us in celebrating the outstanding work of these developers from around the world.
Updated Apple Developer Program License Agreement now available
Attachment 2 of the Apple Developer Program License Agreement has been amended to specify requirements for use of the In-App Purchase API. Please review the changes and accept the updated terms in your account.
View the full terms and conditions
Translations of the updated agreement will be available on the Apple Developer website within one month.
Hello Developer: December 2024
Get your apps and games ready for the holidays
The busiest season on the App Store is almost here. Make sure your apps and games are up to date and ready.
App Review will continue to accept submissions throughout the holiday season. Please plan to submit time-sensitive submissions early, as we anticipate high volume and reviews may take longer to complete from December 20-26.
App Store Award finalists announced
Every year, the App Store Awards celebrate exceptional apps and games that improve people's lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact. This year, the App Store Editorial team is proud to recognize over 40 outstanding finalists. Winners will be announced in the coming weeks.
Price and tax updates for apps, In-App Purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Tax updates as of October:
Your proceeds from the sale of eligible apps and In‑App Purchases have been increased in:
- Nepal: Apple no longer remits Nepal value-added tax (VAT) for local developers and proceeds were increased accordingly.
- Kazakhstan: Apple no longer remits Kazakhstan VAT for local developers and proceeds were increased accordingly.
- Madeira: Decrease of the Madeira VAT rate from 5% to 4% for news publications, magazines and other periodicals, books, and audiobooks.
Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple will not remit VAT in Nepal and Kazakhstan for local developers.
Learn more about your proceeds
Price updates as of December 2:
- Pricing for apps and In-App Purchases will be updated for the Japan and Türkiye storefronts if you haven’t selected one of these as the base for your app or In‑App Purchases.
If you’ve selected the Japan or Türkiye storefront as the base for your app or In-App Purchase, prices won’t change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your In‑App Purchase is an auto‑renewable subscription and won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In‑App Purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Enhancements to the App Store featuring process
Share your app or game’s upcoming content and enhancements for App Store featuring consideration with new Featuring Nominations in App Store Connect. Submit a nomination to tell our team about a new launch, in-app content, or added functionality. If you’re featured in select placements on the Today tab, you’ll also receive a notification via the App Store Connect app.
In addition, you can promote your app or game’s biggest moments — such as an app launch, new version, or select featuring placements on the App Store — with readymade marketing assets. Use the App Store Connect app to generate Apple-designed assets and share them to your social media channels. Include the provided link alongside your assets so people can easily download your app or game on the App Store.
New Broadcast Push Notification Metrics Now Available in the Push Notifications Console
The Push Notifications Console now includes metrics for broadcast push notifications sent in the Apple Push Notification service (APNs) production environment. The console’s interface provides an aggregated view of the broadcast push notifications that are successfully accepted by APNs, the number of devices that receive them, and a snapshot of the maximum number of devices subscribed to your channels.
Coding in the kitchen: How Devin Davies whipped up the tasty recipe app Crouton
Let’s get this out of the way: Yes, Devin Davies is an excellent cook. “I’m not, like, a professional or anything,” he says, in the way that people say they’re not good at something when they are.
But in addition to knowing his way around the kitchen, Davies is also a seasoned developer whose app Crouton, a Swift-built cooking aid, won him the 2024 Apple Design Award for Interaction.
Crouton is part recipe manager, part exceptionally organized kitchen assistant. For starters, the app collects recipes from wherever people find them — blogs, family cookbooks, scribbled scraps from the ’90s, wherever — and uses tasty ML models to import and organize them. “If you find something online, just hit the Share button to pull it into Crouton,” says the New Zealand-based developer. “If you find a recipe in an old book, just snap a picture to save it.”
And when it’s time to start cooking, Crouton reduces everything to the basics by displaying only the current step, ingredients, and measurements (including conversions). There’s no swiping around between apps to figure out how many fl oz are in a cup; no setting a timer in a different app. It’s all handled right in Crouton. “The key for me is: How quickly can I get you back to preparing the meal, rather than reading?” Davies says.
ADA FACT SHEET
Crouton
- Winner: Interaction
- Available on: iPhone, iPad, Mac, Apple Vision Pro, Apple Watch
- Team size: 1
Download Crouton from the App Store
Crouton is the classic case of a developer whipping up something he needed. As the de facto chef in the house, Davies had previously done his meal planning in the Notes app, which worked until, as he laughs, “it got a little out of hand.”
At the time, Davies was in his salad days as an iOS developer, so he figured he could build something that would save him a little time. (It’s in his blood: Davies’s father is a developer too.) "Programming was never my strong suit,” he says, “but once I started building something that solved a problem, I started thinking of programming as a means to an end, and that helped.”
Davies’s full-time job was his meal ticket, but he started teaching himself Swift on the side. Swift, he says, clicked a lot faster than the other languages he’d tried, especially as someone who was still developing a taste for programming. “It still took me a while to get my head into it,” he says, “but I found pretty early on that Swift worked the way I wanted a language to work. You can point Crouton at some text, import that text, and do something with it. The amount of steps you don’t have to think about is astounding.”
I found pretty early on that Swift worked the way I wanted a language to work.
Devin Davies, Crouton
Coding with Swift offered plenty of baked-in benefits. Davies leaned on platform conventions to make navigating Crouton familiar and easy. Lists and collection views took advantage of Camera APIs. VisionKit powered text recognition; a separate model organized imported ingredients by category.
“I could separate out a roughly chopped onion from a regular onion and then add the quantity using a Core ML model,” he says. “It’s amazing how someone like me can build a model to detect ingredients when I really have zero understanding of how it works.”
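For readers curious what that kind of import pipeline involves, here is a hypothetical sketch of recognizing the text of a photographed recipe with the Vision framework. It isn’t Crouton’s actual implementation, just the sort of capability Vision and VisionKit expose for this use case.

```swift
import UIKit
import Vision

// Hypothetical sketch, not Crouton's code: pull the lines of text out of a
// photographed recipe so they can later be parsed into ingredients and steps.
func recognizeRecipeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = (request.results as? [VNRecognizedTextObservation]) ?? []
        // Keep the highest-confidence candidate for each detected line of text.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        DispatchQueue.main.async { completion(lines) }
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```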
Davies designed Crouton with simplicity in mind at all times. “I spent a lot of time figuring out what to leave out rather than bring in,” he says.
The app came together quickly: The first version was done in about six months, but Crouton simmered for a while before finding its audience. “My mom and I were the main active users for maybe a year,” Davies laughs. “But it’s really important to build something that you use yourself — especially when you’re an indie — so there’s motivation to carry on.”
Davies served up Crouton updates for a few years, and eventually the app gained more traction, culminating with its Apple Design Award for Interaction at WWDC24. That’s an appropriate category, Davies says, because he believes his approach to interaction is his app’s special sauce. “My skillset is figuring out how the pieces of an app fit together, and how you move through them from point A to B to C,” he says. “I spent a lot of time figuring out what to leave out rather than bring in.”
Crouton recipes can be imported from blogs, cookbooks, scraps of paper, or anywhere else they might be found.
Davies hopes to use the coming months to explore spicing up Crouton with Apple Intelligence, Live Activities on Apple Watch, and translation APIs. (Though Crouton is his primary app, he’s also built an Apple Vision Pro app called Plate Smash, which is presumably very useful for cooking stress relief.)
But it’s important to him that any new features or upgrades pair nicely with the current Crouton. “I’m a big believer in starting out with core intentions and holding true to them,” he says. “I don’t think that the interface, over time, has to be completely different.”
My skillset is figuring out how the pieces of an app fit together, and how you move through them from point A to B to C.
Devin Davies, Crouton
Because it’s a kitchen assistant, Crouton is a very personal app. It’s in someone’s kitchen at mealtime, it’s helping people prepare meals for their loved ones, it’s enabling them to expand their culinary reach. It makes a direct impact on a person’s day. That’s a lot of influence to have as an app developer — even when a recipe doesn’t quite pan out.
“Sometimes I’ll hear from people who discover a bug, or even a kind of misunderstanding, but they’re always very kind about it,” laughs Davies. “They’ll tell me, ‘Oh, I was baking a cake for my daughter’s birthday, and I put in way too much cream cheese and I ruined it. But, great app!’”
Meet the 2024 Apple Design Award winners
Hello Developer: November 2024
In this edition: The Swift Pathway, new developer activities around the world, and an interview with the creator of recipe app Crouton.
Upcoming changes to the App Store Receipt Signing Intermediate Certificate
As part of ongoing efforts to improve security and privacy on Apple platforms, the App Store receipt signing intermediate certificate is being updated to use the SHA-256 cryptographic algorithm. This certificate is used to sign App Store receipts, which are the proof of purchase for apps and In-App Purchases.
This update is being completed in multiple phases and some existing apps on the App Store may be impacted by the next update, depending on how they verify receipts.
Starting January 24, 2025, if your app performs on-device receipt validation and doesn't support the SHA-256 algorithm, your app will fail to validate the receipt. If your app prevents customers from accessing the app or premium content when receipt validation fails, your customers may lose access to their content.
If your app performs on-device receipt validation, update your app to support certificates that use the SHA-256 algorithm; alternatively, use the AppTransaction and Transaction APIs to verify App Store transactions.
For more details, view TN3138: Handling App Store receipt signing certificate changes.
TestFlight enhancements to help you reach testers
Beta testing your apps, games, and App Clips is even better with new enhancements to TestFlight. Updates include:
- Redesigned invitations. TestFlight invitations now include your beta app description to better highlight new features and content your app or game offers to prospective testers. Apps and games with an approved version that’s ready for distribution can also include their screenshots and app category in their invite. We’ve also added a way for people to leave feedback if they didn’t join your beta, so you can understand why they didn’t participate.
- Tester enrollment criteria. You can choose to set criteria, such as device type and OS versions, to more easily enroll qualified testers via a public link and gather more relevant feedback on your app.
- Public link metrics. Find out how successful your public link is at enrolling testers for your app with new metrics. Understand how many testers viewed your invite in the TestFlight app and chose to accept it. If you’ve set criteria for the public link, you can also view how many testers didn’t meet the criteria.
To get started with TestFlight, upload your build, add test information, and invite testers.
Get ready with the latest beta releases
The beta versions of iOS 18.2, iPadOS 18.2, and macOS 15.2 are now available. Get your apps ready by confirming they work as expected on these releases. And make sure to build and test with Xcode 16.2 beta to take advantage of the advancements in the latest SDKs.
As previewed earlier this year, changes to the browser choice screen, default apps, and app deletion for EU users, as well as support in Safari for exporting user data and for web browsers to import that data, are now available in the beta versions of iOS 18.2 and iPadOS 18.2.
These releases also include improvements to the Apps area in Settings first introduced in iOS 18 and iPadOS 18. All users worldwide will be able to manage their default apps via a Default Apps section at the top of the Apps area. New calling and messaging defaults are also now available for all users worldwide.
Following feedback from the European Commission and from developers, in these releases developers can develop and test EU-specific features, such as alternative browser engines, contactless apps, marketplace installations from web browsers, and marketplace apps, from anywhere in the world. Developers of apps that use alternative browser engines can now use WebKit in those same apps.
View details about the browser choice screen, how to make an app available for users to choose as a default, how to create a calling or messaging app that can be a default, and how to import user data from Safari.
Updated agreements now available
The Apple Developer Program License Agreement and its Schedules 1, 2, and 3 have been updated to support updated policies and upcoming features, and to provide clarification. Please review the changes below and accept the updated terms in your account.
Apple Developer Program License Agreement
- Definitions, Section 3.3.3(J): Specified requirements for use of App Intents.
- Definitions, Section 3.3.5(C): Clarified requirements for use of Sign in with Apple.
- Definitions, Section 3.3.8(G): Specified requirements for use of the Critical Messaging API.
- Definitions, Section 3.3.9(C): Clarified requirements for use of the Apple Pay APIs; updated definition of “Apple” for use of the Apple Pay APIs.
- Attachment 2: Clarified requirements for use of the In-App Purchase API.
Schedules 1, 2, and 3
Apple Services Pte. Ltd. is now the Apple legal entity responsible for the marketing and End-User download of the Licensed and Custom Applications by End-Users located in the following regions:
- Bhutan
- Brunei
- Cambodia
- Fiji
- Korea
- Laos
- Macau
- Maldives
- Micronesia, Fed States of
- Mongolia
- Myanmar
- Nauru
- Nepal
- Papua New Guinea
- Palau
- Solomon Islands
- Sri Lanka
- Tonga
- Vanuatu
Paid Applications Agreement (Schedules 2 and 3)
Exhibit B: Indicated that Apple shall not collect and remit taxes for local developers in Nepal and Kazakhstan, and such developers shall be solely responsible for the collection and remittance of such taxes as may be required by local law.
Exhibit C:
- Section 6: Clarified that Apple will apply Korean VAT on the commissions payable by Korean developers to Apple to be deducted from remittance with respect to sales to Korean customers pursuant to local tax laws.
- Section 10: For Singaporean developers who have registered for Singapore GST and have provided their Singapore GST registration number to Apple, clarified that Apple will apply Singaporean GST on the commissions payable by Singaporean developers to Apple to be deducted from remittance with respect to sales to Singaporean customers pursuant to local tax laws.
View the full terms and conditions
Translations of the Apple Developer Program License Agreement will be available on the Apple Developer website within one month.
New requirement for app updates in the European Union
Starting today, in order to submit updates for apps on the App Store in the European Union (EU), Account Holders or Admins in the Apple Developer Program need to enter trader status in App Store Connect. If you’re a trader, you’ll need to provide your trader information before you can submit your app for review.
Starting February 17, 2025, apps without trader status will be removed from the App Store in the EU until trader status is provided and verified in order to comply with the Digital Services Act.
Apple Push Notification service server certificate update
The Certification Authority (CA) for Apple Push Notification service (APNs) is changing. APNs will update the server certificates in sandbox on January 20, 2025, and in production on February 24, 2025. All developers using APNs will need to update their application’s Trust Store to include the new server certificate: SHA-2 Root : USERTrust RSA Certification Authority certificate.
To ensure a smooth transition and avoid push notification delivery failures, please make sure that both old and new server certificates are included in the Trust Store before the cut-off date for each of your application servers that connect to sandbox and production.
At this time, you don’t need to update the APNs SSL provider certificates issued to you by Apple.
Hello Developer: October 2024
Get your app up to speed, meet the team behind Lies of P, explore new student resources, and more.
Masters of puppets: How ROUND8 Studio carved out a niche for Lies of P
Lies of P is closer to its surprising source material than you might think.
Based on Carlo Collodi’s 1883 novel The Adventures of Pinocchio, the Apple Design Award-winning game is a macabre reimagining of the story of a puppet who longs to be a real boy. Collodi’s story is still best known as a children’s fable. But it’s also preprogrammed with more than its share of darkness — which made it an appealing foundation for Lies of P director Jiwon Choi.
“When we were looking for stories to base the game on, we had a checklist of needs,” says Choi. “We wanted something dark. We wanted a story that was familiar but not entirely childish. And the deeper we dove into Pinocchio, the more we found that it checked off everything we were looking for.”
ADA FACT SHEET
Lies of P
- Winner: Visuals and Graphics
- Team: ROUND8 Studio (developer), NEOWIZ (publisher)
- Available on: Mac
- Team size: 100
- Previous accolades: App Store 2023 Mac Game of the Year, App Store Editors’ Choice
Developed by the South Korea-based ROUND8 Studio and published by its parent company, NEOWIZ, Lies of P is a lavishly rendered dark fantasy adventure and a technical showpiece for Mac with Apple silicon. Yes, players control a humanoid puppet created by Geppetto. But instead of a little wooden boy with a penchant for little white lies, the game’s protagonist is a mechanical warrior with an array of massive swords and a mission to battle through the burned-out city of Krat to find his maker — who isn’t exactly the genial old woodcarver from the fable.
“The story is well-known, and so are the characters,” says Choi. “We knew that to create a lasting memory for gamers, we had to add our own twists.”
In the burned-out world of Lies of P, something this warm and beautiful can’t be good news.
Those twists abound. The puppet is accompanied by a digital lamp assistant named Gemini — pronounced “jim-i-nee,” of course. A major character is a play on the original’s kindly Blue Fairy. A game boss named Mad Donkey is a lot more irritable than the donkeys featured in Collodi’s story. And though nobody’s nose grows in Lies of P, characters have opportunities to lie in a way that directly affects the storyline — and potentially one of the game’s multiple endings.
We knew that to create a lasting memory for gamers, we had to add our own twists.
Jiwon Choi, Lies of P director
“If you play without knowing the original story, you might not catch all those twists,” says Choi. “But it goes the other way, too. We’ve heard from players who became curious about the original story, so they went back and found out about our twists that way.”
There’s nothing curious about the game’s success: In addition to winning a 2024 Apple Design Award for Visuals and Graphics, Lies of P was named the App Store’s 2023 Mac Game of the Year and has collected a bounty of accolades from the gaming community. Many of those call out the game’s visual beauty, a world of rich textures, detailed lighting, and visual customization options like MetalFX upscaling and volumetric fog effects that let you style the ruined city to your liking.
Many of Collodi’s original characters have been updated for Lies of P, including the Black Rabbit Brotherhood, who appear to be hopping mad.
For that city, the ROUND8 team added another twist by moving the story from its original Italian locale to the Belle Époque era of pre-WWI France. “Everyone expected Italy, and everyone expected steampunk,” says Choi, “but we wanted something that wasn’t quite as common in the gaming industry. We considered a few other locations, like the wild west, but the Belle Époque was the right mix of beauty and prosperity. We just made it darker and gloomier.”
We considered a few other locations, like the wild west, but the Belle Époque was the right mix of beauty and prosperity. We just made it darker and gloomier.
Jiwon Choi, Lies of P director
To create the game’s fierce (and oily) combat, Choi and the team took existing Soulslike elements and added their own touches, like customizable weapons that can be assembled from items lying around Krat. “We found that players will often find a weapon they like and use it until the ending,” says Choi. “We found that inefficient. But we also know that everyone has a different taste for weapons.”
The system, he says, gives players the freedom to choose their own combinations instead of pursuing a “best” pre-ordained weapon. And the strategy worked: Choi says players are often found online discussing the best combinations rather than the best weapons. “That was our intention when creating the system,” he says.
The game is set in the Belle Époque, an era known for its beauty and prosperity. “We just made it darker and gloomier,” says Choi.
Also intentional: The game’s approach to lying, another twist on the source material. “Lying in the game isn’t just about deceiving a counterpart,” says Choi. “Humans are the only species who can lie to one another, so lying is about exploring the core of this character.”
It’s also about the murky ethics of lying: Lies of P suggests that, at times, nothing is as human — or humane — as a well-intentioned falsehood.
“The puppet of Geppetto is not human,” says Choi. “But at the same time, the puppet acts like a human and occasionally exhibits human behavior, like getting emotional listening to music. The idea was: Lying is something a human might do. That’s why it’s part of the game.”
Every environment in Lies of P — including the Krat Festival, which has seen better days — is rich with desolate detail.
The Lies of P story might not be done just yet. Choi and team are working on downloadable content and a potential sequel — possibly starring another iconic character who’s briefly teased in the game’s ending. But in the meantime, the team is taking a moment to enjoy the fruits of their success. “At the beginning of development, I honestly doubted that we could even pull this off,” says Choi. “For me, the most surprising thing is that we achieved this. And that makes us think, ‘Well, maybe we could do better next time.’”
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Announcing the Swift Student Challenge 2025
We’re thrilled to announce the Swift Student Challenge 2025. The Challenge provides the next generation of student developers the opportunity to showcase their creativity and coding skills by building app playgrounds with Swift.
Applications for the next Challenge will open in February 2025 for three weeks.
We’ll select 350 Swift Student Challenge winners whose submissions demonstrate excellence in innovation, creativity, social impact, or inclusivity. From this esteemed group, we’ll name 50 Distinguished Winners whose work is truly exceptional and invite them to join us at Apple in Cupertino for three incredible days where they’ll gain invaluable insights from Apple experts and engineers, connect with their peers, and enjoy a host of unforgettable experiences.
All Challenge winners will receive one year of membership in the Apple Developer Program, a special gift from Apple, and more.
To help you get ready, we’re launching new coding resources, including Swift Coding Clubs designed for students to develop skills for a future career, build community, and get ready for the Challenge.
Upcoming regional age ratings in Australia and France
Apple is committed to making the App Store a safe place for everyone — especially kids. Within the next few months, the following regional age ratings for Australia and France will be implemented in accordance with local laws. No action is needed on your part. Where required by local regulations, regional ratings will appear alongside Apple global age ratings.
Australia
Apps with any instances of simulated gambling will display an R18+ regional age rating in addition to the Apple global age rating on the App Store in Australia.
France
Apps with a 17+ Apple global age rating will also display an 18+ regional age rating on the App Store in France.
Update on iPadOS 18 apps distributed in the European Union
The App Review Guidelines have been revised to add iPadOS to Notarization.
Starting September 16:
- Users in the EU can download iPadOS apps on the App Store and through alternative distribution. As mentioned in May, if you have entered into the Alternative Terms Addendum for Apps in the EU, iPadOS first annual installs will begin to accrue and the lower App Store commission rate will apply.
- Alternative browser engines can be used in iPadOS apps.
- Historical App Install Reports in App Store Connect that can be used with our fee calculator will include iPadOS.
If you’ve entered into a previous version of the following agreements, be sure to sign the latest version, which supports iPadOS:
- Alternative Terms Addendum for Apps in the EU
- Web Browser Engine Entitlement Addendum for Apps in the EU
- Embedded Browser Engine Entitlement Addendum for Apps in the EU
Learn more about the update on apps distributed in the EU
Translations of the guidelines will be available on the Apple Developer website within one month.
Win-back offers for auto-renewable subscriptions now available
You can now configure win-back offers — a new type of offer for auto-renewable subscriptions — in App Store Connect. Win-back offers allow you to reach previous subscribers and encourage them to resubscribe to your app or game. For example, you can create a pay up front offer for a reduced subscription price of $9.99 for six months, with a standard renewal price of $39.99 per year. Based on your offer configuration, Apple displays these offers to eligible customers in various places, including:
- Across the App Store, including on your product page, as well as in personalized recommendations and editorial selections on the Today, Games, and Apps tabs.
- In your app or game.
- Via a direct link you share using your own marketing channels.
- In Subscription settings.
When creating win-back offers in App Store Connect, you’ll determine customer eligibility, select regional availability, and choose the discount type. Eligible customers will be able to discover win-back offers this fall.
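For a concrete sense of the client side, here is a minimal sketch of how an app might surface and apply a win-back offer with StoreKit 2 on iOS 18. The product identifier is made up, and the winBackOffers property and .winBackOffer purchase option are assumptions about the new API surface rather than verified signatures, so treat this as an illustration of the flow and check the StoreKit documentation before relying on it.

import StoreKit

// Hypothetical sketch: apply a win-back offer for a lapsed subscriber (StoreKit 2, iOS 18).
// The product ID is illustrative; `winBackOffers` and `.winBackOffer` are assumed names.
func redeemWinBackOfferIfAvailable() async throws {
    guard let product = try await Product.products(for: ["com.example.pro.yearly"]).first,
          let subscription = product.subscription,
          let offer = subscription.winBackOffers.first else { return }

    // Purchase the subscription with the win-back offer applied.
    let result = try await product.purchase(options: [.winBackOffer(offer)])

    if case .success(let verification) = result,
       case .verified(let transaction) = verification {
        // Unlock entitlements, then finish the transaction.
        await transaction.finish()
    }
}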
App Store submissions now open for the latest OS releases
iOS 18, iPadOS 18, macOS Sequoia, tvOS 18, visionOS 2, and watchOS 11 will soon be available to customers worldwide. Build your apps and games using the Xcode 16 Release Candidate and latest SDKs, test them using TestFlight, and submit them for review to the App Store. You can now start deploying seamlessly to TestFlight and the App Store from Xcode Cloud. With exciting new features like watchOS Live Activities, app icon customization, and powerful updates to Swift, Siri, Controls, and Core ML, you can deliver even more unique experiences on Apple platforms.
And beginning next month, you’ll be able to bring the incredible new features of Apple Intelligence into your apps to help inspire the way users communicate, work, and express themselves.
Starting April 2025, apps uploaded to App Store Connect must be built with SDKs for iOS 18, iPadOS 18, tvOS 18, visionOS 2, or watchOS 11.
Hello Developer: September 2024
Get your apps ready by digging into these video sessions and resources.
- Explore machine learning on Apple platforms
- Bring expression to your app with Genmoji
Browse new resources: Learn how to make actions available to Siri and Apple Intelligence.
Need a boost? Check out our curated guide to machine learning and AI.
FEATURED
Get ready for OS updates: Dive into the latest updates with these developer sessions.
Level up your games
- Port advanced games to Apple platforms
- Design advanced games for Apple platforms
Bring your vision to life
- Design great visionOS apps
- Design interactive experiences for visionOS
Upgrade your iOS and iPadOS apps
- Extend your app’s controls across the system
- Elevate your tab and sidebar experience in iPadOS
Browse Apple Developer on YouTube
Get expert guidance: Check out curated guides to the latest features and technologies.
BEHIND THE DESIGN
Rytmos: A puzzle game with a global beat
Find out how Floppy Club built an Apple Design Award winner that sounds as good as it looks.
Behind the Design: The rhythms of Rytmos
MEET WITH APPLE
Reserve your spot for upcoming developer activities
- Envision the future: Create great apps for visionOS: Find out how to build visionOS apps for a variety of use cases. (October 2, Cupertino)
- Build faster and more efficient apps: Learn how to optimize your use of Apple frameworks, resolve performance issues, and reduce launch time. (October 23, Cupertino)
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Behind the Design: The rhythms of Rytmos
Rytmos is a game that sounds as good as it looks.
With its global rhythms, sci-fi visuals, and clever puzzles, the 2024 Apple Design Award winner for Interaction is both a challenge and an artistic achievement. To solve each level, players must create linear pathways on increasingly complex boards, dodging obstacles and triggering buttons along the way. It’s all set to a world-music backdrop; different levels feature genres as diverse as Ethiopian jazz, Hawaiian slack key guitar, and Gamelan from Indonesia, just to name a few.
And here’s the hook: Every time you clear a level, you add an instrument to an ever-growing song.
“The idea is that instead of reacting to the music, you’re creating it,” says Asger Strandby, cofounder of Floppy Club, the Denmark-based studio behind Rytmos. “We do a lot to make sure it doesn’t sound too wild. But the music in Rytmos is entirely generated by the way you solve the puzzles.”
ADA FACT SHEET
Rytmos
- Winner: Interaction
- Team: Floppy Club
- Available on: iPhone, iPad
- Team size: 5
Download Rytmos from the App Store
The artful game is the result of a partnership that dates back decades. In addition to being developers, Strandby and Floppy Club cofounder Niels Böttcher are both musicians who hail from the town of Aarhus in Denmark. “It’s a small enough place that if you work in music, you probably know everyone in the community,” laughs Böttcher.
The music in Rytmos comes mostly from traveling and being curious.
Niels Böttcher, Floppy Club cofounder
The pair connected back in the early 2000s, bonding over music more than games. “For me, games were this magical thing that you could never really make yourself,” says Strandby. “I was a geeky kid, so I made music and eventually web pages on computers, but I never really thought I could make games until I was in my twenties.” Instead, Strandby formed bands like Analogik, which married a wild variety of crate-digging samples — swing music, Eastern European folk, Eurovision-worthy pop — with hip-hop beats. Strandby was the frontman, while Böttcher handled the behind-the-scenes work. “I was the manager in everything but name,” he says.
The band was a success: Analogik went on to release five studio albums and perform at Glastonbury, Roskilde, and other big European festivals. But when their music adventure ended, the pair moved back into separate tech jobs for several years — until the time came to join forces again. “We found ourselves brainstorming one day, thinking about, ‘Could we combine music and games in some way?’” says Böttcher. “There are fun similarities between the two in terms of structures and patterns. We thought, ‘Well, let’s give it a shot.’”
Puzzles in Rytmos — like the one set on the planet “Hateta” — come with a little history lesson about the music being played.
The duo launched work on a rhythm game that was powered by their histories and travels. “I’ve collected CDs and tapes from all over the world, so the genres in Rytmos are very carefully chosen,” says Böttcher. “We really love Ethiopian jazz music, so we included that. Gamelan music (traditional Indonesian ensemble music that’s heavy on percussion) is pretty wild, but incredible. And sometimes, you just hear an instrument and say, ‘Oh, that tabla has a really nice sound.’ So the music in Rytmos comes mostly from traveling and being curious.”
The game took shape early, but the mazes in its initial versions were much more intricate. To help bring them down to a more approachable level, the Floppy Club team brought on art director Niels Fyrst. “He was all about making things cleaner and clearer,” says Böttcher. “Once we saw what he was proposing — and how it made the game stronger — we realized, ‘OK, maybe we’re onto something.’”
Success in Rytmos isn’t just that you’re beating a level. It’s that you’re creating something.
Asger Strandby, Floppy Club cofounder
Still, even with a more manageable set of puzzles, a great deal of design complexity remained. Building Rytmos levels was like stacking a puzzle on a puzzle; the team not only had to build out the levels, but also create the music to match. To do so, Strandby and his brother, Bo, would sketch out a level and then send it over to Böttcher, who would sync it to music — a process that proved even more difficult than it seems.
“The sound is very dependent on the location of the obstacles in the puzzles,” says Strandby. “That’s what shapes the music that comes out of the game. So we’d test and test again to make sure the sound didn’t break the idea of the puzzle.”
Puzzles in Rytmos are all about getting from Point A to Point B — but things are never as simple as they seem.
The process, he says, was “quite difficult” to get right. “Usually with something like this, you create a loop, and then maybe add another loop, and then add layers on top of it,” says Böttcher. “In Rytmos, hitting an emitter triggers a tone, percussion sound, or chord. One tone hits another tone, and then another, and then another. In essence, you’re creating a pattern while playing the game.”
We’ve actually gone back to make some of the songs more imprecise, because we want them to sound human.
Niels Böttcher, Floppy Club cofounder
The unorthodox approach leaves room for creativity. “Two different people’s solutions can sound different,” says Strandby. And when players win a level, they unlock a “jam mode” where they can play and practice freely. “It’s just something to do with no rules after all the puzzling,” laughs Strandby.
Yet despite all the technical magic happening behind the scenes, the actual musical results had to have a human feel. “We’re dealing with genres that are analog and organic, so they couldn’t sound electronic at all,” says Böttcher. “We’ve actually gone back to make some of the songs more imprecise, because we want them to sound human.”
Best of all, the game is shot through with creativity and cleverness — even offscreen. Each letter in the Rytmos logo represents the solution to a puzzle. The company’s logo is a 3.5-inch floppy disk, a little nod to their first software love. (“That’s all I wished for every birthday,” laughs Böttcher.) And Böttcher and Strandby hope that the game serves as an introduction to sounds and people players might not be familiar with. “Learning about music is a great way to learn about a culture,” says Strandby.
But mostly, Rytmos is an inspirational experience that meets its lofty goal. “Success in Rytmos isn’t just that you’re beating a level,” says Strandby. “It’s that you’re creating something.”
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Price and tax updates for apps, In-App Purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Price updates
On September 16:
- Pricing for apps and In-App Purchases¹ will be updated for the Chile, Laos, and Senegal storefronts if you haven’t selected one of these as the base for your app or In‑App Purchase.¹ These updates also consider value‑added tax (VAT) introductions listed in the “Tax updates” section below.
If you’ve selected the Chile, Laos, or Senegal storefront as the base for your app or In-App Purchase, prices won’t change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your In‑App Purchase is an auto‑renewable subscription and won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In‑App Purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Pricing and availability start times by region
Set a price for an In-App Purchase
Tax updates
As of August 29:
Your proceeds from the sale of eligible apps and In‑App Purchases have been modified in:
- Laos: VAT introduction of 10%
- Senegal: VAT introduction of 18%
- India: Equalization levy of 2% no longer applicable
Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Laos and Senegal.
Beginning in September:
Your proceeds from the sale of eligible apps and In‑App Purchases will be modified in:
- Canada: Digital services tax introduction of 3%
- Finland: VAT increase from 24% to 25.5%
Learn more about your proceeds
1: Excludes auto-renewable subscriptions.
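To make the arithmetic behind these proceeds adjustments concrete, here is a rough, hedged sketch in Swift. It assumes the commonly described model in which applicable tax is backed out of the customer price before the commission is applied, and it uses made-up numbers and a flat 30% commission; the exact per-storefront calculation is governed by the Paid Applications Agreement.

import Foundation

// Hypothetical illustration: how a 10% VAT introduction (as in Laos) changes proceeds
// on a fixed customer price, assuming tax comes off the top before commission.
func estimatedProceeds(customerPrice: Decimal, vatRate: Decimal, commissionRate: Decimal) -> Decimal {
    let priceExcludingTax = customerPrice / (1 + vatRate)   // back out VAT from the displayed price
    return priceExcludingTax * (1 - commissionRate)         // commission applies to the net amount
}

let before = estimatedProceeds(customerPrice: 0.99, vatRate: 0.00, commissionRate: 0.30) // roughly 0.69
let after  = estimatedProceeds(customerPrice: 0.99, vatRate: 0.10, commissionRate: 0.30) // roughly 0.63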
It’s Glowtime.
Join us for a special Apple Event on September 9 at 10 a.m. PT.
Watch on apple.com, Apple TV, or YouTube Live.
Upcoming changes to the browser choice screen, default apps, and app deletion for EU users
By the end of this year, we’ll make changes to the browser choice screen, default apps, and app deletion for iOS and iPadOS for users in the EU. These updates come from our ongoing dialogue with the European Commission about compliance with the Digital Markets Act’s requirements in these areas.
Developers of browsers offered in the browser choice screen in the EU will have additional information about their browser shown to users who view the choice screen, and will get access to more data about the performance of the choice screen. The updated choice screen will be shown to all EU users who have Safari set as their default browser. For details about the changes coming to the browser choice screen, view About the browser choice screen in the EU.
For users in the EU, iOS 18 and iPadOS 18 will also include a new Default Apps section in Settings that lists defaults available to each user. In future software updates, users will get new default settings for dialing phone numbers, sending messages, translating text, navigation, managing passwords, keyboards, and call spam filters. To learn more, view Update on apps distributed in the European Union.
Additionally, the App Store, Messages, Photos, Camera, and Safari apps will now be deletable for users in the EU.
Upcoming requirements for app distribution in the European Union
As a reminder, Account Holders or Admins in the Apple Developer Program need to enter trader status in App Store Connect for apps on the App Store in the European Union (EU) in order to comply with the Digital Services Act.
Please note these new dates and requirements:
- October 16, 2024: Trader status will be required to submit app updates. If you’re a trader, you’ll need to provide your trader information before you can submit your app for review.
- February 17, 2025: Apps without trader status will be removed from the App Store in the EU until trader status is provided and verified.
Apple Entrepreneur Camp applications are now open
Apple Entrepreneur Camp supports underrepresented founders and developers, and encourages the pipeline and longevity of these entrepreneurs in technology. Attendees benefit from one-on-one code-level guidance, receive unprecedented access to Apple engineers and experts, and become part of the extended global network of Apple Entrepreneur Camp alumni.
Applications are now open for female,* Black, Hispanic/Latinx, and Indigenous founders and developers. And this year we’re thrilled to bring back our in-person programming at Apple in Cupertino. For those who can’t attend in person, we’re still offering our full program online. We welcome established entrepreneurs with app-driven businesses to learn more about eligibility requirements and apply today.
Apply by September 3, 2024.
* Apple believes that gender expression is a fundamental right. We welcome all women to apply to this program.
Updates to the StoreKit External Purchase Link Entitlement
In response to the announcement by the European Commission in June, we’re making the following changes to Apple’s Digital Markets Act compliance plan. We’re introducing updated terms that will apply this fall for developers with apps in the European Union storefronts of the App Store that use the StoreKit External Purchase Link Entitlement. Key changes include:
- Developers can communicate and promote offers for purchases available at a destination of their choice. The destination can be an alternative app marketplace, another app, or a website, and it can be accessed outside the app or via a web view that appears in the app.
- Developers may design and execute within their apps the communication and promotion of offers. This includes providing information about prices of subscriptions or any other offer available both within and outside the app, and providing explanations or instructions about how to subscribe to offers outside the application. These communications must provide accurate information regarding the digital goods or services available for purchase.
- Developers may choose to use an actionable link that can be tapped, clicked, or scanned, to take users to their destination.
- Developers can use any number of URLs, without declaring them in the app’s Info.plist.
- Links with parameters, redirects, and intermediate links to landing pages are permitted.
- Updated business terms for apps with the External Purchase Link Entitlement are being introduced to align with the changes to these capabilities.
Learn more by visiting Alternative payment options on the App Store in the European Union or request a 30-minute online consultation to ask questions and provide feedback about these changes.
Hello Developer: August 2024
Meet with Apple
Explore the latest developer activities — including sessions, consultations, and labs — all around the world.
BEHIND THE DESIGN
Creating the make-believe magic of Lost in Play
Discover how the developers of this Apple Design Award-winning game conjured up an imaginative world of oversized frogs, mischievous gnomes, and occasional pizzas.
Behind the Design: Creating the make-believe magic of Lost in Play
Get resourceful
- Build local experiences with room tracking: Use room tracking in visionOS to provide custom interactions with physical spaces.
- Preview your app’s interface in Xcode: Iterate designs quickly and preview your apps’ displays across different Apple devices.
- Explore Apple Music Feed: Now available through the Apple Developer Program, Apple Music Feed provides bulk rich catalog metadata for developing experiences that link back to Apple Music.
- Updates to runtime protection in macOS Sequoia: Find out about updates to Gatekeeper.
- Evaluate your app’s performance: Find out what’s working — and what you can improve — with peer group benchmark metrics across app categories, business models, and download volumes.
SESSION OF THE MONTH
Extend your Xcode Cloud workflows
Discover how Xcode Cloud can adapt to your development needs.
Subscribe to Hello Developer
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Behind the Design: Creating the make-believe magic of Lost in Play
Lost in Play is a game created by and for people who love to play make-believe.
The 2024 Apple Design Award (ADA) winner for Innovation is a point-and-click adventure that follows two young siblings, Toto and Gal, through a beautifully animated world of forbidden forests, dark caverns, friendly frogs, and mischievous gnomes. To advance through the game’s story, players complete fun mini-games and puzzles, all of which feel like a Saturday morning cartoon: Before the journey is out, the pair will fetch a sword from a stone, visit a goblin village, soar over the sea on an enormous bird, and navigate the real-world challenges of sibling rivalry. They will also order several pizzas.
ADA FACT SHEET
Lost in Play
- Winner: Innovation
- Team: Happy Juice Games, Israel
- Available on: iPhone, iPad
- Team size: 7
- Previous accolades: iPad Game of the Year (2023)
Lost in Play is the brainchild of Happy Juice Games, a small Israel-based team whose three cofounders drew inspiration from their own childhoods — and their own families. “We’ve all watched our kids get totally immersed playing make-believe games,” says Happy Juice’s Yuval Markovich. “We wanted to recreate that feeling. And we came up with the idea of kids getting lost, partly in their imaginations, and partly in real life.”
The team was well-equipped for the job. Happy Juice cofounders Markovich, Oren Rubin, and Alon Simon all have backgrounds in TV and film animation, and they knew they wanted a playful, funny adventure even before drawing their first sketch. “As adults, we can forget how to enjoy simple things like that,” says Simon, “so we set out to make a game about imagination, full of crazy creatures and colorful places.”
Toto meets a new friend in the belly of a whale in Lost in Play. At right is an early sketch of the scene.
For his part, Markovich didn’t just have a history in gaming; he taught himself English by playing text-based adventure games in the ‘80s. “You played those games by typing ‘go north’ or ‘look around,’ so every time I had to do something, I’d open the dictionary to figure out how to say it,” he laughs. “At some point I realized, ‘Oh wait, I know this language.’”
The story became a matter of, ‘OK, a goblin village sounds fun — how do we get there?’
Yuval Markovich, Happy Juice Games cofounder
But those games could be frustrating, as anyone who ever tried to “leave house” or “get ye flask” can attest. Lost in Play was conceived from day one to be light and navigable. “We wanted to keep it comic, funny, and easy,” says Rubin. “That’s what we had in mind from the very beginning.”
Toto must go out on a limb to solve the ravens' puzzle in this screenshot and early sketch.
Lost in Play may be a linear experience — it feels closer to playing a movie than a sandbox game — but it’s hardly simple. As befitting a playable dream, its story feels a little unmoored, like it’s being made up on the fly. That’s because the team started with art, characters, and environments, and then went back to add a hero’s journey to the elements.
“We knew we’d have a dream in the beginning that introduced a few characters. We knew we’d end up back at the house. And we knew we wanted one scene under the sea, and another in a maker space, and so on,” says Markovich. “The story became a matter of, ‘OK, a goblin village sounds fun — how do we get there?’”
Early concept sketches show the character design evolution of Toto and Gal.
Naturally, the team drew on their shared backgrounds in animation to shape the game all throughout its three-year development process — and not just in terms of art. Like a lot of cartoons, Lost in Play has no dialogue, both to increase accessibility and to enhance the story’s illusion. Characters speak in a silly gibberish. And there are little cartoon-inspired tricks throughout; for instance, the camera shakes when something is scary. “When you study animation, you also study script writing, cinematography, acting, and everything else,” Markovich says. “I think that’s why I like making games so much: They have everything.”
The best thing we hear is that it’s a game parents enjoy playing with their kids.
Oren Rubin, Happy Juice Games cofounder
And in a clever acknowledgment of the realities of childhood, brief story beats return Toto and Gal to the real world to navigate practical issues like sibling rivalries. That’s on purpose: Simon says early versions of the game were maybe a little too cute. “Early on, we had the kids sleeping neatly in their beds,” says Simon. “But we decided that wasn’t realistic. We added a bit more of them picking on each other, and a conflict in the middle of the game.” Still, Markovich says that even the real-world interludes keep one foot in the imaginary world. “They may go through a park where an old lady is feeding pigeons, but then they walk left and there’s a goblin in a swamp,” he laughs.
Strange frogs distributing swords are the basis for one of Lost in Play's many inventive puzzles.
On the puzzle side, Lost in Play’s mini-games are designed to strike the right level of challenging. The team is especially proud of the game’s system of hints, which often present challenges in themselves. “We didn’t want people getting trapped like I did in those old adventure games,” laughs Markovich. “I loved those, but you could get stuck for months. And we didn’t want people going online to find answers either.” The answer: A hint system that doesn’t just hand over the answer but gives players a feeling of accomplishment, an incentive to go back for more.
It all adds up to a unique experience for players of all ages — and that’s by design too. “The best feedback we get is that it’s suitable for all audiences,” says Rubin, “and the best thing we hear is that it’s a game parents enjoy playing with their kids.”
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Updates to runtime protection in macOS Sequoia
In macOS Sequoia, users will no longer be able to Control-click to override Gatekeeper when opening software that isn’t signed correctly or notarized. They’ll need to visit System Settings > Privacy & Security to review security information for software before allowing it to run.
If you distribute software outside of the Mac App Store, we recommend that you submit your software to be notarized. The Apple notary service automatically scans your Developer ID-signed software and performs security checks. When your software is ready for distribution, it’s assigned a ticket to let Gatekeeper know it’s been notarized so customers can run it with confidence.
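If your distribution pipeline is Swift-based, a minimal sketch of driving notarization from a build script might look like the following. The keychain profile name, archive path, and app path are placeholders, and the script simply shells out to xcrun notarytool and xcrun stapler; adapt it to your own signing and packaging setup.

import Foundation

// Hypothetical sketch: submit a Developer ID-signed archive for notarization, then staple the ticket.
// "AC_NOTARY", "MyApp.zip", and "MyApp.app" are placeholders for your own profile and artifacts.
func runXcrun(_ arguments: [String]) throws {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/xcrun")
    process.arguments = arguments
    try process.run()
    process.waitUntilExit()
}

try runXcrun(["notarytool", "submit", "MyApp.zip", "--keychain-profile", "AC_NOTARY", "--wait"])
try runXcrun(["stapler", "staple", "MyApp.app"])  // staple the notarization ticket to the app bundle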
Updated guidelines now available
The App Review Guidelines have been revised to support updated policies and upcoming features, and to provide clarification.
- Updated 4.7 to clarify that PC emulator apps can offer to download games.
- Added 4.7, 4.7.2, and 4.7.3 to Notarization.
View the App Review Guidelines
Get resources and support to prepare for App Review
Translations of the guidelines will be available on the Apple Developer website within one month.
Hello Developer: July 2024
Dive into all the new updates from WWDC24
Our doors are open. Join us to explore all the new sessions, documentation, and features through online and in-person activities held in 15 cities around the world.
Join us July 22–26 for online office hours to get one-on-one guidance about your app or game. And visit the forums where more engineers are ready to answer your questions.
WWDC24 highlights
BEHIND THE DESIGN
Positive vibrations: How Gentler Streak approaches fitness with “humanity”
Find out why the team behind this Apple Design Award-winning lifestyle app believes success is about more than stats.
Behind the Design: How Gentler Streak approaches fitness with “humanity”
GET RESOURCEFUL
New sample code
- Grow your skills with the BOT-anist: Build a multiplatform app that uses windows, volumes, and animations to create a robot botanist’s greenhouse.
- Doing the things a particle can: Add a range of visual effects to a RealityKit view by attaching a particle emitter component to an entity.
- Chart a course for Destination Video: Leverage SwiftUI to build an immersive media experience.
- Design for games: Make your game feel at home on all Apple devices.
- Take control of controls: Provide quick access to a feature of your app from Control Center, the Lock Screen, or the Action button.
- Tint your icons: Create dark and tinted app icon variants for iOS and iPadOS.
SESSION OF THE MONTH
Say hello to the next generation of CarPlay design system
Learn how the system at the heart of CarPlay allows each automaker to express their vehicle’s character and brand.
Subscribe to Hello Developer
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Behind the Design: How Gentler Streak approaches fitness with “humanity”
Gentler Streak is a different kind of fitness tracker. In fact, to hear cofounder and CEO Katarina Lotrič tell it, it’s not really a fitness tracker at all.
“We think of it more as a lifestyle app,” says Lotrič, from the team’s home office in Kranj, Slovenia. “We want it to feel like a compass, a reminder to get moving, no matter what that means for you,” she says.
ADA FACT SHEET
The app’s “Go Gentler” page suggests optimal workouts for a user’s day.
Gentler Streak
- Winner: Social Impact
- Team: Gentler Stories d.o.o., Slovenia
- Available on: iPhone, iPad, Apple Watch
- Team size: 8
- Previous accolades: Apple Watch App of the Year (2022), Apple Design Award finalist (Visuals and graphics, 2023)
Download Gentler Streak from the App Store
Learn more about Gentler Streak
Meet the 2024 Apple Design Award winners
That last part is key. True to its name, the Apple Design Award-winning Gentler Streak takes a friendlier approach to fitness. Instead of focusing on performance — on the bigger, faster, and stronger — Gentler Streak meets people where they are, presenting workout suggestions, statistics, and encouragement for all skill levels.
“A lot of mainstream fitness apps can seem to be about pushing all the time,” Lotrič says. “But for a lot of people, that isn’t the reality. Everyone has different demands and capabilities on different days. We thought, ‘Can we create a tool to help anyone know where they’re at on any given day, and guide them to a sustainably active lifestyle?’”
If a 15-minute walk is what your body can do at that moment, that’s great.
Katarina Lotrič, CEO and cofounder of Gentler Stories
To reach those goals, Lotrič and her Gentler Stories cofounders — UI/UX designer Andrej Mihelič, senior developer Luka Orešnik, and CTO and iOS developer Jasna Krmelj — created an app powered by an optimistic and encouraging vibe that considers physical fitness and mental well-being equally.
Fitness and workout data (collected from HealthKit) is presented in a colorful, approachable design. The app’s core functions are available for free; a subscription unlocks premium features. And an abstract mascot named Yorhart (sound it out) adds to the light touch. “Yorhart helps you establish a relationship with the app and with yourself, because it’s what your heart would be telling you,” Lotrič says.
Good news from Yorhart: This user’s needs and capabilities are being met perfectly.
It’s working: In addition to the 2024 Apple Design Award for Social Impact, Gentler Streak was named 2022 Apple Watch App of the Year. What’s more, it has an award-winning ancestor: Lotrič and Orešnik won an Apple Design Award in 2017 for Lake: Coloring Book for Adults.
The trio used the success of Lake to learn more about navigating the industry. But something else was happening during that time: The team, all athletes, began revisiting their own relationships with fitness. Lotrič suffered an injury that kept her from running for months and affected her mental health; she writes about her experiences in Gentler Streak’s editorial section. Mihelič had a different issue. “My problem wasn’t that I lacked motivation,” he says. “It was that I worked out too much. I needed something that let me know when it was enough.”
Statistics are just numbers. Without knowing how to interpret them, they are meaningless.
Katarina Lotrič, CEO and cofounder of Gentler Stories
As a way to reset, Mihelič put together an internal app, a simple utility that encouraged him to move but also allowed time for recuperation. “It wasn’t very gentle,” he laughs. “But the core idea was more or less the same. It guided but it didn’t push. And it wasn’t based on numbers; it was more explanatory.”
Over time, the group began using Mihelič’s app. “We saw right away that it was sticky,” says Lotrič. “I came back to it daily, and it was just this basic prototype. After a while, we realized, ‘Well, this works and is built, to an extent. Why don’t we see if there’s anything here?’”
Gentler Streak pulls workout information from HealthKit and presents it in simple, easy-to-understand charts.
That’s when Lotrič, Orešnik, and Krmelj split from Lake to create Gentler Stories with Mihelič. “I wanted in because I loved the idea behind the whole company,” Krmelj says. “It wasn’t just about the app. I really like the app. But I really believed in this idea about mental well-being.”
Early users believed it too: The team found that initial TestFlight audience members returned at a stronger rate than expected. “Our open and return rates were high enough that we kept thinking, ‘Are these numbers even real?’” laughs Lotrič. The team found that those early users responded strongly to the “gentler” side, the approachable repositioning of statistics.
“We weren’t primarily addressing the audience that most fitness apps seemed to target,” says Lotrič. “We focused on everyone else, the people who maybe didn’t feel like they belonged in a gym. Statistics are just numbers. Without knowing how to interpret them, they are meaningless. We wanted to change that and focus on the humanity.” By fall of 2021, Gentler Streak was ready for prime time.
Gentler Streak on Apple Watch brings encouragement closer than ever before.
Today’s version of the app follows the same strategy as Mihelič’s original prototype. Built largely in UIKit, its health data is smartly organized, the design is friendly and consistent, and features like its Monthly Summary view — which shows how you’re doing in relation to your history — focus less on comparison and more on progress, whatever that may mean. “If a 15-minute walk is what your body can do at that moment, that’s great,” Lotrič says. “That’s how we make people feel represented.”
The app’s social impact continues to grow. In the spring of 2024, Gentler Streak added support for Japanese, Korean, and traditional and simplified Chinese languages; previous updates added support for French, German, Italian, Spanish, and Brazilian Portuguese.
And those crucial features — fitness tracking, workout suggestions, metrics, and activity recaps — will remain available to everyone. “That goes with the Gentler Stories philosophy,” says Lotrič. “We’re bootstrapped, but at the same time we know that not everyone is in a position to support us. We still want to be a tool that helps people stay healthy not just for the first two weeks of the year or the summer, but all year long.”
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Alternative payment options in the EU in visionOS 1.2
Alternative payment options are now supported in visionOS 1.2 and later for apps distributed on the App Store in the EU.
Changes for apps in the EU now available in iPadOS 18 beta 2
The changes for apps in the European Union (EU), currently available to iOS users in the 27 EU member countries, can now be tested in iPadOS 18 beta 2 with Xcode 16 beta 2.
Also, the Web Browser Engine Entitlement Addendum for Apps in the EU and Embedded Browser Engine Entitlement Addendum for Apps in the EU now include iPadOS. If you’ve already entered into either of these addendums, be sure to sign the updated terms.
Learn more about the recent changes:
The App Store on Apple Vision Pro expands to new markets
Apple Vision Pro will launch in China mainland, Hong Kong, Japan, and Singapore on June 28 and in Australia, Canada, France, Germany, and the United Kingdom on July 12. Your apps and games will be automatically available on the App Store in regions you’ve selected in App Store Connect.
If you’d like, you can:
- Manage the availability of your visionOS apps and compatible iPhone or iPad apps at any time.
- Request to have your app evaluated directly on Apple Vision Pro.
- Localize your product page metadata for local audiences.
You can also learn how to build native apps to fully take advantage of exciting visionOS features.
Upcoming regional age ratings in Australia and South Korea
Apple is committed to making sure that the App Store is a safe place for everyone — especially kids. Within the next few months, you’ll need to indicate in App Store Connect if your app includes loot boxes available for purchase. In addition, a regional age rating based on local laws will automatically appear on the product page of the apps listed below on the App Store in Australia and South Korea. No other action is needed. Regional age ratings appear in addition to Apple global age ratings.
Australia
A regional age rating is shown if Games is selected as the primary or secondary category in App Store Connect.
- 15+ regional age rating: Games with loot boxes available for purchase.
- 18+ regional age rating: Games with Frequent/Intense instances of Simulated Gambling indicated in App Store Connect.
South Korea
A regional age rating is shown if either Games or Entertainment is selected as the primary or secondary category in App Store Connect, or if the app has Frequent/Intense instances of Simulated Gambling in any category.
- KR-All regional age rating: Apps and games with an Apple global age rating of 4+ or 9+.
- KR-12 regional age rating: Apps and games with an Apple global age rating of 12+. Certain apps and games in this group may receive a KR-15 regional age rating from the South Korean Games Ratings and Administration Committee (GRAC). If this happens, App Review will reach out to impacted developers.
- Certain apps and games may receive a KR-19 regional age rating from the GRAC. Instead of a pictogram, text will indicate this rating.
WWDC24 resources and survey
Thank you to everyone who joined us for an amazing week. We hope you found value, connection, and fun. You can continue to:
- Watch sessions at any time.
- Check out session highlights.
- Read about newly announced technologies.
- Get sample code from sessions.
- Dive into new and updated documentation.
We’d love to know what you thought of this year’s conference. If you’d like to tell us about your experience, please complete the WWDC24 survey.
WWDC24 highlights
Browse the biggest moments from an incredible week of sessions.
Machine Learning & AI
- Explore machine learning on Apple platforms
- Bring expression to your app with Genmoji
- Get started with Writing Tools
- Bring your app to Siri
- Design App Intents for system experiences
Swift
- What’s new in Swift
- Meet Swift Testing
- Migrate your app to Swift 6
- Go small with Embedded Swift
SwiftUI & UI Frameworks
- What’s new in SwiftUI
- SwiftUI essentials
- Enhance your UI animations and transitions
- Evolve your document launch experience
- Squeeze the most out of Apple Pencil
Developer Tools
- What’s new in Xcode 16
- Extend your Xcode Cloud workflows
Spatial Computing
- Design great visionOS apps
- Design interactive experiences for visionOS
- Explore game input in visionOS
- Bring your iOS or iPadOS game to visionOS
- Create custom hover effects in visionOS
- Work with windows in SwiftUI
- Dive deep into volumes and immersive spaces
- Customize spatial Persona templates in SharePlay
Design
- Design great visionOS apps
- Design interactive experiences for visionOS
- Design App Intents for system experiences
- Design Live Activities for Apple Watch
- Say hello to the next generation of CarPlay design system
- Add personality to your app through UX writing
Graphics & Games
- Port advanced games to Apple platforms
- Design advanced games for Apple platforms
- Bring your iOS or iPadOS game to visionOS
- Meet TabletopKit for visionOS
App Store Distribution and Marketing
- What’s new in StoreKit and In-App Purchase
- What’s new in App Store Connect
- Implement App Store Offers
Privacy & Security
- Streamline sign-in with passkey upgrades and credential managers
- What’s new in privacy
App and System Services
- Meet the Contact Access Button
- Use CloudKit Console to monitor and optimize database activity
- Extend your app’s controls across the system
Safari & Web
- Optimize for the spatial web
- Build immersive web experiences with WebXR
Accessibility & Inclusion
- Catch up on accessibility in SwiftUI
- Get started with Dynamic Type
- Build multilingual-ready apps
Photos & Camera
- Build a great Lock Screen camera capture experience
- Build compelling spatial photo and video experiences
- Keep colors consistent across captures
- Use HDR for dynamic image experiences in your app
Audio & Video
- Enhance the immersion of media viewing in custom environments
- Explore multiview video playback in visionOS
- Build compelling spatial photo and video experiences
Business & Education
- Introducing enterprise APIs for visionOS
- What’s new in device management
Health & Fitness
- Explore wellbeing APIs in HealthKit
- Build custom swimming workouts with WorkoutKit
- Get started with HealthKit in visionOS
Today @ WWDC24: Day 5
Revisit the biggest moments from WWDC24
Explore the highlights.
WWDC24 highlights
Catch WWDC24 recaps around the world
Join us for special in-person activities at Apple locations worldwide this summer.
Explore apps and games from the Keynote
Check out all the incredible featured titles.
How’d we do?
We’d love to know your thoughts about this year’s conference.
Today’s WWDC24 playlist: Power Up
Get ready for one last day.
And that’s a wrap!
Thanks for being part of another incredible WWDC. It’s been a fantastic week of celebrating, connecting, and exploring, and we appreciate the opportunity to share it all with you.
Today @ WWDC24: Day 4
Plan for platforms
Find out what’s new across Apple platforms.
- Design great visionOS apps
- Bring your iOS or iPadOS game to visionOS
- Design App Intents for system experiences
Explore all platforms sessions
Guides
Sessions, labs, documentation, and sample code — all in one place.
- WWDC24 iOS & iPadOS guide
- WWDC24 Games guide
- WWDC24 visionOS guide
- WWDC24 watchOS guide
Today’s WWDC24 playlist: Coffee Shop
Comfy acoustic sounds for quieter moments.
One more to go
What a week! But we’re not done yet — we’ll be back tomorrow for a big Friday. #WWDC24
Today @ WWDC24: Day 3
All Swift, all day
Explore new Swift and SwiftUI sessions.
- What’s new in Swift
- What’s new in SwiftUI
- Meet Swift Testing
Explore all Swift sessions
Guides
Sessions, labs, documentation, and sample code — all in one place.
- WWDC24 Swift guide
- WWDC24 Developer Tools guide
- WWDC24 SwiftUI & UI Frameworks guide
Go further with Swift
Connect with Apple experts and the worldwide developer community.
- Request a consultation in the WWDC labs.
- Explore the Apple Developer Forums.
- Connect with developers all over the world.
Cutting-edge sounds from the global frontiers of jazz.
More to come
Thanks for being a part of #WWDC24. We’ll be back tomorrow with even more.
Today @ WWDC24: Day 2
Watch the Platforms State of the Union 5-minute recap
Explore everything announced at WWDC24
Introducing Apple Intelligence
Get smarter.
- Explore machine learning on Apple platforms
- Get started with Writing Tools
- Bring your app to Siri
Explore all Machine Learning and AI sessions
Guides
Sessions, labs, documentation, and sample code — all in one place.
- WWDC24 Machine Learning & AI guide
- WWDC24 Design guide
Go further with Apple Intelligence
- Request a consultation in the WWDC labs.
- Explore the Apple Developer Forums.
- Connect with developers all over the world.
Summer sounds to change your latitude.
More tomorrow
Thanks for being a part of this incredible week. We’ll catch you tomorrow for another big day of technology and creativity. #WWDC24
Find out what’s new and download beta releases
Discover the latest advancements across Apple platforms, including the all-new Apple Intelligence, that can help you create even more powerful, intuitive, and unique experiences.
To start exploring and building with the latest features, download beta versions of Xcode 16, iOS 18, iPadOS 18, macOS 15, tvOS 18, visionOS 2, and watchOS 11.
Explore new documentation and sample code from WWDC24
Browse new and updated documentation and sample code to learn about the latest technologies, frameworks, and APIs introduced at WWDC24.
WWDC24 Design guide
WWDC24 GUIDE Design
Discover how this year’s design announcements can help make your app shine on Apple platforms.
Whether you’re refining your design, building for visionOS, or starting from scratch, this year’s design sessions can take your app to the next level on Apple platforms. Find out what makes a great visionOS app, and learn how to design interactive experiences for the spatial canvas. Dive into creating advanced games for Apple devices, explore the latest SF Symbols, learn how to add personality to your app through writing, and much more.
Get the highlights
Download the design one-sheet.
VIDEOS
Explore the latest video sessions
- Design great visionOS apps
- Design advanced games for Apple platforms
- Create custom environments for your immersive apps in visionOS
- Explore game input in visionOS
- Design Live Activities for Apple Watch
- What’s new in SF Symbols 6
- Design interactive experiences for visionOS
- Design App Intents for system experiences
- Build multilingual-ready apps
- Add personality to your app through UX writing
- Get started with Dynamic Type
- Create custom visual effects with SwiftUI
FORUMS
Find answers and get advice
Ask questions and get advice about design topics on the Apple Developer Forums.
COMMUNITY
Meet the community
Explore a selection of developer activities all over the world during and after WWDC.
RESOURCES
Explore the latest resources
- Get the latest Apple Design Resources kits and templates.
- Explore the latest SF Symbols.
- Download the fonts you need to design interfaces for your apps on Apple platforms.
- Find out all that’s new in the HIG.
- Designing for games: Explore an all-new way to start creating games that feel comfortable and intuitive on Apple platforms.
- Tab bars: iPadOS apps now give people the option to switch between a tab bar or sidebar when navigating their app. Plus, items in the tab bar can now be customized.
- App icons: Learn how people can customize their Home Screens to show dark and tinted icons.
- Controls: Discover how people can quickly and easily perform actions from your app from Control Center, the Lock Screen, and the Action button.
- Widgets: Learn how to tint widgets when a person has customized their Home Screen to show dark and tinted icons.
- Windows: Learn how to use volumes in visionOS to display 2D or 3D content that people can view from any angle.
- Live Activities: Craft Live Activities that look and feel at home in the Smart Stack in watchOS.
- Immersive experiences: Explore the latest guidance on immersion, including design environments and virtual hands.
- Game controls: Learn how to design touch controls for games on iOS and iPadOS.
WWDC24 Swift guide
WWDC24 GUIDE Swift
Your guide to everything new in Swift, related tools, and supporting frameworks.
From expanded support across platforms and community resources, to an optional language mode with an emphasis on data-race safety, this year’s Swift updates meet you where you are. Explore this year’s video sessions to discover everything that’s new in Swift 6, find tools that support migrating to the new language mode at your own pace, learn about new frameworks that support developing with Swift, and much more.
Get the highlights
Download the Swift one-sheet.
Download
VIDEOS
Explore the latest video sessions What’s new in Swift Watch now What’s new in SwiftData Watch now Migrate your app to Swift 6 Watch now Go small with Embedded Swift Watch now A Swift Tour: Explore Swift’s features and design Watch now Create a custom data store with SwiftData Watch now Explore the Swift on Server ecosystem Watch now Explore Swift performance Watch now Consume noncopyable types in Swift Watch now Track model changes with SwiftData history Watch nowFORUMS
Find answers and get advice
Find support from Apple experts and the developer community on the Apple Developer Forums, and check out the Swift Forums on swift.org.
Explore Swift on the Apple Developer Forums
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into Apple Developer documentation
- Discover new and updated Swift documentation
- Explore the Swift Standard Library
- Learn how to migrate your code to Swift 6
- Reference the Swift programming language guide
- Read A Swift Tour: An overview of the features and syntax of Swift
- Explore the new Swift-dedicated GitHub organization
- Learn more about the Swift Package Manager (SwiftPM)
WWDC24 SwiftUI & UI Frameworks guide
WWDC24 GUIDE SwiftUI & UI Frameworks
Design and build your apps like never before.
With enhancements to live previews in Xcode, new customization options for animations and styling, and updates to interoperability with UIKit and AppKit views, SwiftUI is the best way to build apps for Apple platforms. Dive into the latest sessions to discover everything new in SwiftUI, UIKit, AppKit, and more. Make your app stand out with more options for custom visual effects and enhanced animations. And explore sessions that cover the essentials of building apps with SwiftUI.
Get the highlights
Download the SwiftUI one-sheet.
Download
VIDEOS
Explore the latest video sessions What’s new in SwiftUI Watch now What’s new in AppKit Watch now What’s new in UIKit Watch now SwiftUI essentials Watch now What’s new in watchOS 11 Watch now Swift Charts: Vectorized and function plots Watch now Elevate your tab and sidebar experience in iPadOS Watch now Bring expression to your app with Genmoji Watch now Squeeze the most out of Apple Pencil Watch now Catch up on accessibility in SwiftUI Watch now Migrate your TVML app to SwiftUI Watch now Get started with Writing Tools Watch now Dive deep into volumes and immersive spaces Watch now Work with windows in SwiftUI Watch now Enhance your UI animations and transitions Watch now Evolve your document launch experience Watch now Build multilingual-ready apps Watch now Create custom hover effects in visionOS Watch now Tailor macOS windows with SwiftUI Watch now Demystify SwiftUI containers Watch now Support semantic search with Core Spotlight Watch now Create custom visual effects with SwiftUI Watch nowFORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
View discussions about SwiftUI & UI frameworks
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation
- Level up the accessibility of your SwiftUI app.
- Interact with nearby points of interest.
- Build a document-based app with SwiftUI.
- Customize window styles and state-restoration behavior in macOS.
- Enhance your app’s content with tab navigation.
- Create visual effects with SwiftUI.
- Unify your app’s animations.
- Find all of this year’s SwiftUI, AppKit, and UIKit updates.
- Explore updates in the Human Interface Guidelines (HIG).
Today @ WWDC24: Day 1
It all starts here
Keynote
The exciting reveal of the latest Apple software and technologies. 10 a.m. PT.
Keynote Watch now
Platforms State of the Union
The newest advancements on Apple platforms. 1 p.m. PT.
Platforms State of the Union Watch now
Where to watch
- Apple Developer app and website
- Apple Developer YouTube channel
The full lineup of sessions arrives after the Keynote. And you can start exploring the first batch right after the Platforms State of the Union.
What to do at WWDC24
The Keynote is only the beginning. Explore the first day of activities.
- Request your spot in the labs after the Keynote.
- Explore the Apple Developer Forums.
- Connect with developers all over the world.
The Apple Design Awards recognize unique achievements in app and game design — and provide a moment to step back and celebrate the innovations of the Apple developer community.
More to come
Thanks for reading and get some rest! We’ll be back tomorrow for a very busy Day 2. #WWDC24
WWDC24 Developer Tools guide
WWDC24 GUIDE Developer Tools
Explore a wave of updates to developer tools that make building apps and games easier and more efficient than ever.
Watch the latest video sessions to explore a redesigned code completion experience in Xcode 16, and say hello to Swift Assist — a companion for all your coding tasks. Level up your code with the help of Swift Testing, the new, easy-to-learn framework that leverages Swift features to help enhance your testing experience. Dive deep into debugging, updates to Xcode Cloud, and more.
Get the highlights
Download the developer tools one-sheet.
Download
VIDEOS
Explore the latest video sessions Meet Swift Testing Watch now What’s new in Xcode 16 Watch now Go further with Swift Testing Watch now Xcode essentials Watch now Run, Break, Inspect: Explore effective debugging in LLDB Watch now Break into the RealityKit debugger Watch now Demystify explicitly built modules Watch now Extend your Xcode Cloud workflows Watch now Analyze heap memory Watch nowFORUMS
Find answers and get advice
Find support from Apple experts and the developer community on the Apple Developer Forums.
Explore developer tools on the forums
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation
Expand your tool belt with new and updated articles and documentation.
- Explore updates in Xcode 16
- Discover Swift Testing
- Migrate a test from XCTest
- Define test functions
- Organize test functions with suite types
- Implement parameterized tests
- Enable and disable tests
- Limit the running time of tests
- Add tags to tests
- Add comments to tests
- Associate bugs with tests
- Interpret bug identifiers
WWDC24 iOS & iPadOS guide
WWDC24 GUIDE iOS & iPadOS
Your guide to all the new features and tools for building apps for iPhone and iPad.
Learn how to create more customized and intelligent apps that appear in more places across the system with the latest Apple technologies. And with Apple Intelligence, you can bring personal intelligence into your apps to deliver new capabilities — all with great performance and built-in privacy. Explore new video sessions about controls, Live Activities, App Intents, and more.
Get the highlights
Download the iOS & iPadOS one-sheet.
Download
VIDEOS
Explore the latest video sessions Bring your app to Siri Watch now Discover RealityKit APIs for iOS, macOS, and visionOS Watch now Explore machine learning on Apple platforms Watch now Elevate your tab and sidebar experience in iPadOS Watch now Extend your app’s controls across the system Watch now Streamline sign-in with passkey upgrades and credential managers Watch now What’s new in App Intents Watch now Squeeze the most out of Apple Pencil Watch now Meet FinanceKit Watch now Bring your iOS or iPadOS game to visionOS Watch now Build a great Lock Screen camera capture experience Watch now Design App Intents for system experiences Watch now Bring your app’s core features to users with App Intents Watch now Broadcast updates to your Live Activities Watch now Unlock the power of places with MapKit Watch now Implement App Store Offers Watch now What’s new in Wallet and Apple Pay Watch now Meet the Contact Access Button Watch now What’s new in device management Watch nowFORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Get a head start with sample code
Dive into documentation
- Discover WidgetKit for controls.
- Find out how to set up broadcast push notifications, send channel management requests to APNs, and send broadcast push notification requests to APNs.
- Check out the new LockedCameraCapture, Media Accessibility, AccessorySetupKit, and Contact Provider frameworks.
- Explore object tracking with ARKit.
- Learn how to elevate your iPad app with the tab sidebar, as well as with a floating tab bar and integrated sidebar, using SwiftUI or UIKit.
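As a rough sketch of the SwiftUI side of that last item, assuming the iOS 18 Tab type and the sidebarAdaptable tab view style, with placeholder view content, an adaptable tab bar and sidebar might be declared like this:

import SwiftUI

struct LibraryRootView: View {
    var body: some View {
        // On iPad this can appear as a floating tab bar that adapts into a sidebar;
        // on iPhone it falls back to the familiar bottom tab bar.
        TabView {
            Tab("Home", systemImage: "house") {
                Text("Home")        // placeholder content
            }
            Tab("Library", systemImage: "books.vertical") {
                Text("Library")     // placeholder content
            }
            Tab("Search", systemImage: "magnifyingglass", role: .search) {
                Text("Search")      // placeholder content
            }
        }
        .tabViewStyle(.sidebarAdaptable)
    }
}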
WWDC24 Machine Learning & AI guide
WWDC24 GUIDE Machine Learning & AI
Bring personal intelligence to your apps.
Apple Intelligence brings powerful, intuitive, and integrated personal intelligence to Apple platforms — designed with privacy from the ground up. And enhancements to our machine learning frameworks let you run and train your machine learning and artificial intelligence models on Apple devices like never before.
Get the highlights
Download the Machine Learning & AI one-sheet.
Download
VIDEOS
Explore the latest video sessions
Get the most out of Apple Intelligence by diving into sessions that cover updates to Siri integration and App Intents, and how to support Writing Tools and Genmoji in your app. And learn how to bring machine learning and AI directly into your apps using our machine learning frameworks.
Explore machine learning on Apple platforms Watch now Bring your app to Siri Watch now Bring your app’s core features to users with App Intents Watch now Bring your machine learning and AI models to Apple silicon Watch now Get started with Writing Tools Watch now Deploy machine learning and AI models on-device with Core ML Watch now Support real-time ML inference on the CPU Watch now Bring expression to your app with Genmoji Watch now What’s new in App Intents Watch now What’s new in Create ML Watch now Design App Intents for system experiences Watch now Discover Swift enhancements in the Vision framework Watch now Meet the Translation API Watch now Accelerate machine learning with Metal Watch now Train your machine learning and AI models on Apple GPUs Watch nowFORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
Dive into Machine learning and AI on the forums
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation
- Build a search interface for your app.
- Bring Writing Tools to your app with UITextView for UIKit and NSTextView for AppKit.
- Add expression to your app with Genmoji using NSAdaptiveImageGlyph in UIKit and AppKit.
- Integrate machine learning models into your app using Core ML.
- Create your own machine learning models using Create ML.
- Discover all of the latest Core ML updates.
- Find out what’s new in the Vision framework.
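For the Core ML item in the list above, a minimal sketch of loading a bundled model and running one prediction might look like this. The model file name and feature names are hypothetical; in practice you would typically use the Swift class Xcode generates for your model rather than the raw MLModel API.

import Foundation
import CoreML

func predict(features: [String: Double]) async throws -> MLFeatureProvider {
    // Locate and compile a raw .mlmodel resource (hypothetical file name).
    // Real apps usually ship the compiled .mlmodelc and cache the compilation result.
    guard let modelURL = Bundle.main.url(forResource: "SleepClassifier", withExtension: "mlmodel") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let compiledURL = try await MLModel.compileModel(at: modelURL)

    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all   // let Core ML choose CPU, GPU, or Neural Engine
    let model = try MLModel(contentsOf: compiledURL, configuration: configuration)

    // Wrap plain Swift values in a feature provider and run a single prediction.
    let input = try MLDictionaryFeatureProvider(
        dictionary: features.mapValues { MLFeatureValue(double: $0) }
    )
    return try model.prediction(from: input)
}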
WWDC24 Games guide
WWDC24 GUIDE Games
Create the next generation of games for millions of players worldwide.
Learn how to create cutting-edge gaming experiences across a unified gaming platform built with tightly integrated graphics software and a scalable hardware architecture. Explore new video sessions about gaming in visionOS, game input, the Game Porting Toolkit 2, and more.
Get the highlights
Download the games one-sheet.
Download
VIDEOS
Explore the latest video sessions Render Metal with passthrough in visionOS Watch now Meet TabletopKit for visionOS Watch now Port advanced games to Apple platforms Watch now Design advanced games for Apple platforms Watch now Explore game input in visionOS Watch now Bring your iOS or iPadOS game to visionOS Watch now Accelerate machine learning with Metal Watch nowFORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Get a head start with sample code
Dive into documentation
- Check out updated design guidance for games.
- Easily bring your game to Apple platforms using the Game Porting Toolkit 2.
- Meet the new TabletopKit framework.
- Learn how to play sound from a location in a 3D scene.
- Learn how to manage your game window for Metal in macOS.
- Get details on adapting your game interface for smaller screens.
- Discover how to improve your game’s graphics performance and settings.
- Find out how to improve the player experience for games with large downloads.
- Explore adding touch controls to games that support game controllers.
WWDC24 watchOS guide
WWDC24 GUIDE watchOS
Your guide to all the new features and tools for building apps for Apple Watch.
Learn how to take advantage of the increased intelligence and capabilities of the Smart Stack. Explore new video sessions about relevancy cues, interactivity, Live Activities, and double tap.
Get the highlights
Download the watchOS one-sheet.
Download
VIDEOS
Explore the latest video sessions What’s new in watchOS 11 Watch now Bring your Live Activity to Apple Watch Watch now What’s new in SwiftUI Watch now SwiftUI essentials Watch now Design Live Activities for Apple Watch Watch now Catch up on accessibility in SwiftUI Watch now Build custom swimming workouts with WorkoutKit Watch now Demystify SwiftUI containers Watch nowFORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
View discussions about watchOS
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation
- Discover double tap.
- Learn how to use the latest technologies to build apps for Apple Watch.
- Get updated guidance on design for Apple Watch.
- Visit the Apple Watch site.
WWDC24 sessions schedule, lab requests, guides, and documentation now available
WWDC24 is here! Here’s how to make the most of your week:
- Watch daily sessions.
- Request one-on-one online lab appointments with Apple experts.
- Check out curated guides to the week’s biggest announcements.
- Dive into new and updated documentation.
WWDC24 visionOS guide
WWDC24 GUIDE visionOS
The infinite canvas is waiting for you.
In this year’s sessions, you’ll get an overview of great visionOS app design, explore object tracking, and discover new RealityKit APIs. You’ll also find out how to build compelling spatial photo and video experiences, explore enterprise APIs for visionOS, find out how to render Metal with passthrough, and much more.
Get the highlights
Download the visionOS one-sheet.
Download
VIDEOS
Explore the latest video sessions Design great visionOS apps Watch now Explore object tracking for visionOS Watch now Compose interactive 3D content in Reality Composer Pro Watch now Discover RealityKit APIs for iOS, macOS, and visionOS Watch now Create enhanced spatial computing experiences with ARKit Watch now Enhance your spatial computing app with RealityKit audio Watch now Build compelling spatial photo and video experiences Watch now Meet TabletopKit for visionOS Watch now Render Metal with passthrough in visionOS Watch now Explore multiview video playback in visionOS Watch now Introducing enterprise APIs for visionOS Watch now Dive deep into volumes and immersive spaces Watch now Build a spatial drawing app with RealityKit Watch now Optimize for the spatial web Watch now Explore game input in visionOS Watch now Create custom environments for your immersive apps in visionOS Watch now Enhance the immersion of media viewing in custom environments Watch now Design interactive experiences for visionOS Watch now Create custom hover effects in visionOS Watch now Optimize your 3D assets for spatial computing Watch now Discover area mode for Object Capture Watch now Bring your iOS or iPadOS game to visionOS Watch now Build immersive web experiences with WebXR Watch now Get started with HealthKit in visionOS Watch now What’s new in Quick Look for visionOS Watch now What’s new in USD and MaterialX Watch now Customize spatial Persona templates in SharePlay Watch now Create enhanced spatial computing experiences with ARKit Watch now Break into the RealityKit debugger Watch now What’s new in SwiftUI Watch nowFORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
View discussions about visionOS
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Get a head start with sample code
- BOT-anist: Discover how the RealityKit debugger lets you inspect the entity hierarchy of spatial apps, debug rogue transformation, detect bad behavior, and find missing entities.
- Destination Video: Leverage 3D video and Spatial Audio to deliver an immersive experience.
- Incorporating real-world surroundings in an immersive experience: Make your app’s content respond to the local shape of the world.
- Simulating particles in your visionOS app: Add a range of visual effects to a RealityKit view by attaching a particle emitter component to an entity.
- Simulating physics with collisions in your visionOS app: Create entities that behave and react like physical objects in a RealityKit view.
- Discover new visionOS content in the HIG.
- Creating your first visionOS app: Learn new tips for building a new visionOS app using SwiftUI and platform-specific features.
- Adding 3D content to your app: Explore the latest in adding depth and dimension to your visionOS app.
- Understanding RealityKit’s modular architecture: Learn how everything fits together in RealityKit.
- Designing RealityKit content with Reality Composer Pro: Discover updates that can help you quickly create RealityKit scenes for your visionOS app.
- Presenting windows and spaces: Find out how to open and close the scenes that make up your app’s interface.
Updated agreements and guidelines now available
The App Review Guidelines, Apple Developer Program License Agreement, and Apple Developer Agreement have been updated to support updated policies and upcoming features, and to provide clarification. Please review the changes below and accept the updated terms as needed.
App Review Guidelines
- 2.1(a): Added to Notarization.
- 2.1(b): Added requirement to explain why configured in-app items cannot be found or reviewed in your app to your review notes.
- 2.5.8: We will no longer reject apps that simulate multi-app widget experiences.
- 4.6: This guideline has been removed.
- Sections 1, 6(B): Updated “Apple ID” to “Apple Account.”
- Section 16(A): Clarified export compliance requirements.
- Section 18: Updated terminology for government end users.
- Definitions, Section 2.1, 3.3.6(C), 3.3.10(A), 14.2(C), Attachment 9, Schedules 1-3: Updated “Apple ID” to “Apple Account.”
- Definitions: Clarified definition of Apple Maps Service.
- Definitions, Section 3.3.6(F): Specified requirements for using the Apple Music Feed API.
- Definitions, Section 3.3.8(F): Added terms for use of the Now Playing API.
- Section 3.2(h): Added terms for use of Apple Software and Services.
- Section 6.5: Added terms for use of TestFlight.
- Section 7.7: Added terms on customization of icons.
- Section 11.2(f), 14.8(A): Clarified export compliance requirements.
- Section 14.9: Updated terminology for government end users.
- Attachment 5, Section 3.1: Added terms for use of Wallet pass templates.
Please sign in to your account to review and accept the updated terms.
View all agreements and guidelines
Translations of the terms will be available on the Apple Developer website within one month.
Hello Developer: June 2024
With WWDC24 just days away, there’s a lot of ground to cover, so let’s get right to it.
WWDC24
Introducing the 2024 Apple Design Award winners
Innovation. Ingenuity. Inspiration.
WWDC24: Everything you need to know
From the Keynote to the last session drop, here are the details for an incredible week of sessions, labs, community activities, and more.
Download the Apple Developer app >
Subscribe to Apple Developer on YouTube >
Watch the Keynote
Don’t miss the exciting reveal of the latest Apple software and technologies at 10 a.m. PT on Monday, June 10.
Watch the Platforms State of the Union
Here’s your deep dive into the newest advancements on Apple platforms. Join us at 1 p.m. PT on Monday, June 10.
Get ready for sessions
Learn something new in video sessions posted to the Apple Developer app, website, and YouTube channel. The full schedule drops after the Keynote on Monday, June 10.
Prepare for labs
Here’s everything you need to know to get ready for online labs.
Find answers on the forums
Discuss the conference’s biggest moments on the Apple Developer Forums.
Get the most out of the forums >
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
Explore community activities >
Say hello to the first WWDC24 playlist
The official WWDC24 playlists drop right after the Keynote. Until then, here’s a teaser playlist to get you excited for the week.
Coming up: One incredible week
Have a great weekend, and we’ll catch you on Monday. #WWDC24
Watch the WWDC24 Keynote
WWDC24
Tune in at 10 a.m. PT on June 10 to catch the exciting reveal of the latest Apple software and technologies.
Keynote Watch now Keynote (ASL) Watch now
Watch the WWDC24 Platforms State of the Union
WWDC24
Tune in at 1 p.m. PT on June 10 to dive deep into the newest advancements on Apple platforms.
Platforms State of the Union Watch now Platforms State of the Union (ASL) Watch now
Price and tax updates for apps, In-App Purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Price updates
On June 21, pricing for apps and In-App Purchases¹ will be updated for the Egypt, Ivory Coast, Nepal, Nigeria, Suriname, and Zambia storefronts if you haven’t selected one of these as the base for your app or In‑App Purchase.¹ These updates also consider the following value‑added tax (VAT) changes:
- Ivory Coast: VAT introduction of 18%
- Nepal: VAT introduction of 13% and digital services tax of 2%
- Suriname: VAT introduction of 10%
- Zambia: VAT introduction of 16%
Prices won’t change on the Egypt, Ivory Coast, Nepal, Nigeria, Suriname, or Zambia storefront if you’ve selected that storefront as the base for your app or In-App Purchase.¹ Prices on other storefronts will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your In‑App Purchase is an auto‑renewable subscription and won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In‑App Purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Pricing and availability start times by region
Set a price for an In-App Purchase
Tax updates
Your proceeds for sales of apps and In-App Purchases will change to reflect the new tax rates and updated prices. Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Ivory Coast, Nepal, Suriname, and Zambia.
As of today, June 6, your proceeds from the sale of eligible apps and In‑App Purchases have been modified in the following countries to reflect introductions of or changes in tax rates.
- France: Digital services tax no longer applicable
- Ivory Coast: VAT introduction of 18%
- Malaysia: Sales and Service Tax (SST) increased to 8% from 6%
- Nepal: VAT introduction of 13% and digital services tax introduction of 2%
- Norway: VAT increased to 20% from 0% for certain Norwegian news publications
- Suriname: VAT introduction of 10%
- Uganda: Digital services tax introduction of 5%
- Zambia: VAT introduction of 16%
The Fitness and Health category has a new attribute: “Content is primarily accessed through streaming”. If this is relevant to your apps or In-App Purchases that offer fitness video streaming, review and update your selections in the Pricing and Availability section of Apps in App Store Connect.
Learn about setting tax categories
1: Excludes auto-renewable subscriptions.
Introducing the 2024 Apple Design Award winners
Every year, the Apple Design Awards recognize innovation, ingenuity, and technical achievement in app and game design.
The incredible developers behind this year’s finalists have shown what can be possible on Apple platforms — and helped lay the foundation for what’s to come.
We’re thrilled to present the winners of the 2024 Apple Design Awards.
Action packed.
One week to go. Don’t miss the exciting reveal of the latest Apple software and technologies.
Keynote kicks off at 10 a.m. PT on June 10.
Join us for the Platforms State of the Union at 1 p.m. PT on June 10.
Introducing the 2024 Apple Design Award finalists
Every year, the Apple Design Awards recognize innovation, ingenuity, and technical achievement in app and game design.
But they’ve also become something more: A moment to step back and celebrate the Apple developer community in all its many forms.
Coming in swiftly.
Join the worldwide developer community for an incredible week of technology and creativity — all online and free. WWDC24 takes place from June 10-14.
Check out the new Apple Developer Forums
The Apple Developer Forums have been redesigned for WWDC24 to help developers connect with Apple experts, engineers, and each other to find answers and get advice.
Apple Developer Relations and Apple engineering are joining forces to field your questions and work to solve your technical issues. You’ll have access to an expanded knowledge base and enjoy quick response times — so you can get back to creating and enhancing your app or game. Plus, Apple Developer Program members now have priority access to expert advice on the forums.
Hello Developer: May 2024
It won’t be long now! WWDC24 takes place online from June 10 through 14, and we’re here to help you get ready for the biggest developer event of the year. In this edition:
- Explore Pathways, a brand-new way to learn about developing for Apple platforms.
- Meet three Distinguished Winners of this year’s Swift Student Challenge.
- Get great tips from the SharePlay team.
- Browse new developer activities about accessibility, machine learning, and more.
WWDC24
Introducing Pathways
If you’re new to developing for Apple platforms, we’ve got an exciting announcement. Pathways are simple and easy-to-navigate collections of the videos, documentation, and resources you’ll need to start building great apps and games. Because Pathways are self-directed and can be followed at your own pace, they’re the perfect place to begin your journey.
Explore Pathways for Swift, SwiftUI, design, games, visionOS, App Store distribution, and getting started as an Apple developer.
Meet three Distinguished Winners of the Swift Student Challenge
Elena Galluzzo, Dezmond Blair, and Jawaher Shaman all drew inspiration from their families to create their winning app playgrounds. Now, they share the hope that their apps can make an impact on others as well.
Meet Elena, Dezmond, and Jawaher >
MEET WITH APPLE EXPERTS
Check out the latest worldwide developer activities
- Meet with App Review online to discuss the App Review Guidelines and explore best practices for a smooth review process. Sign up for May 14.
- Join us in Bengaluru for a special in-person activity to commemorate Global Accessibility Awareness Day. Sign up for May 15.
- Learn how Apple machine learning frameworks can help you create more intelligent apps and games in an online activity. Sign up for May 19.
Browse the full schedule of activities >
NEWS
Explore Apple Pencil Pro
Bring even richer and more immersive interactions to your iPad app with new features, like squeeze gestures, haptic feedback, and barrel-roll angle tracking.
BEHIND THE DESIGN
The rise of Tide Guide
Here’s the swell story of how fishing with his grandfather got Tucker MacDonald hooked into creating his tide-predicting app.
‘I taught myself’: Tucker MacDonald and the rise of Tide Guide View now
GROW YOUR BUSINESS
Explore simple, safe transactions with In-App Purchase
Take advantage of powerful global pricing tools, promotional features, analytics only available from Apple, built-in customer support, and fraud detection.
Q&A
Get shared insights from the SharePlay team
Learn about shared experiences, spatial Personas, that magic “shockwave” effect, and more.
Q&A with the SharePlay team View now
DOCUMENTATION
Browse new and updated docs
- Explore the new framework for converting Pages, Numbers, and Keynote files to PDF, enabling you to show an inline preview in a web browser.
- Check out Writing ARM64 code for Apple platforms for an update on data-independent timing.
- Visit the HIG for new and enhanced guidance on virtual hands and interactive elements in visionOS, sheets in iPadOS, and more.
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Q&A with the SharePlay team
SharePlay is all about creating meaningful shared experiences in your app. By taking advantage of SharePlay, your app can provide a real-time connection that synchronizes everything from media playback to 3D models to collaborative tools across iPhone, iPad, Mac, Apple TV, and Apple Vision Pro. We caught up with the SharePlay team to ask about creating great SharePlay experiences, spatial Personas, that magic “shockwave” effect, and more.
How does a person start a SharePlay experience?
Anyone can begin a group activity by starting a FaceTime call and then launching a SharePlay-supported app. When they do, a notification about the group activity will appear on all participants’ screens. From there, participants can join — and come and go — as they like. You can also start a group activity from your app, from the share sheet, or by adding a SharePlay button to your app.
How can I use SharePlay to keep media playback in sync?
SharePlay supports coordinated media playback using AVKit. You can use the system coordinator to synchronize your own player across multiple participants. If you have an ad-supported app, you can synchronize both playback and ad breaks. SharePlay also provides the GroupSessionMessenger API, which lets participants communicate in near-real time.
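As a rough sketch of what the group activity behind those answers can look like in code: the activity type and title here are hypothetical, and a real app would attach its playback coordinator or a GroupSessionMessenger once a session starts.

import GroupActivities

// A hypothetical activity describing what participants do together.
struct WatchTogether: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Watch Together"
        metadata.type = .watchTogether
        return metadata
    }
}

func startWatchingTogether() async throws {
    let activity = WatchTogether()
    // Ask the system whether SharePlay can be activated right now (e.g. during a FaceTime call).
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try await activity.activate()   // prompts all participants to join
    case .activationDisabled, .cancelled:
        break                               // fall back to a local-only experience
    @unknown default:
        break
    }
}

// Elsewhere, observe incoming sessions and join them.
func observeWatchTogetherSessions() async {
    for await session in WatchTogether.sessions() {
        session.join()
        // Attach a playback coordinator or a GroupSessionMessenger to `session` here.
    }
}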
What’s the difference between SharePlay and Shared with You? Can they work together?
SharePlay allows people to share rich experiences with each other. Shared with You helps make app content that people are sharing in Messages available to your app. For example, if a group chat is discussing a funny meme video from your app, adopting Shared with You would allow your app to highlight that content in the app. And if your app supports SharePlay, you can surface that relevant content as an option for watching together.
Separately, Shared with You offers ways to initiate collaboration on shared, persisted content (such as documents) over Messages and FaceTime. You can choose to support SharePlay on that collaborative content, but if you do, consider the ephemerality of a SharePlay experience compared to the persistence of collaboration. For example, if your document is a presentation, you may wish to leverage Shared with You to get editors into the space while using SharePlay to launch an interactive presentation mode that just isn’t possible with screen sharing alone.
What’s the easiest way for people to share content?
When your app lets your system know that your current view has shareable content on screen, people who bring their devices together can seamlessly share that content — much like NameDrop, which presents a brief “shockwave” animation when they do. This method supports the discrete actions of sharing documents, initiating SharePlay, and starting a collaboration. This can also connect your content to the system share sheet and help you expose shareable content to the Share menu in visionOS.
Can someone on iPhone join a SharePlay session with someone on Apple Vision Pro?
Yes! SharePlay is supported across iOS, iPadOS, macOS, tvOS, and visionOS. That means people can watch a show together on Apple TV+ and keep their playback synchronized across all platforms. To support a similar playback situation in your app, watch Coordinate media playback in Safari with Group Activities. If you’re looking to maintain your app’s visual consistency across platforms, check out the Group Session Messenger and DrawTogether sample project. Remember: SharePlay keeps things synchronized, but your UI is up to you.
How do I get started adopting spatial Personas with SharePlay in visionOS?
When you add Group Activities to your app, people can share in that activity over FaceTime while appearing windowed — essentially the same SharePlay experience they’d see on other platforms. In visionOS, you have the ability to create a shared spatial experience using spatial Personas in which participants are placed according to a template. For example:
Using spatial Personas, the environment is kept consistent and participants can see each others’ facial expressions in real time.
How do I maintain visual and spatial consistency with all participants in visionOS?
FaceTime in visionOS provides a shared spatial context by placing spatial Personas in a consistent way around your app. This is what we refer to as “visual consistency.” You can use SharePlay to maintain the same content in your app for all participants.
Can both a window and a volume be shared at the same time in a SharePlay session?
No. Only one window or volume can be associated with a SharePlay session, but you can help the system choose the proper window or volume.
How many people can participate in a group activity?
SharePlay supports 33 total participants, including yourself. Group activities on visionOS involving spatial Personas support five participants at a time.
Do iOS and iPadOS apps that are compatible with visionOS also support SharePlay in visionOS?
Yes. During a FaceTime call, your app will appear in a window, and participants in the FaceTime call will appear next to it.
Learn more about SharePlay
Design spatial SharePlay experiences Watch now Build spatial SharePlay experiences Watch now Share files with SharePlay Watch now Add SharePlay to your app Watch now
‘I taught myself’: Tucker MacDonald and the rise of Tide Guide
Lots of apps have great origin stories, but the tale of Tucker MacDonald and Tide Guide seems tailor-made for the Hollywood treatment. It begins in the dawn hours on Cape Cod, where a school-age MacDonald first learned to fish with his grandfather.
“Every day, he’d look in the paper for the tide tables,” says MacDonald. “Then he’d call me up and say, ‘Alright Tucker, we’ve got a good tide and good weather. Let’s be at the dock by 5:30 a.m.’”
Rhapsody in blue: Tide Guide delivers Washington weather data in a gorgeous design and color scheme.
That was MacDonald’s first introduction to tides — and the spark behind Tide Guide, which delivers comprehensive forecasts through top-notch data visualizations, an impressive array of widgets, an expanded iPad layout, and Live Activities that look especially great in, appropriately enough, the Dynamic Island. The SwiftUI-built app also offers beautiful Apple Watch complications and a UI that can be easily customized, depending how deep you want to dive into its data. It’s a remarkable blend of original design and framework standards, perfect for plotting optimal times for a boat launch, research project, or picnic on the beach.
Impressively, Tide Guide was named a 2023 Apple Design Award finalist — no mean feat for a solo developer who had zero previous app-building experience and started his career as a freelance filmmaker.
“I wanted to be a Hollywood director since I was in the fifth grade,” says MacDonald. Early in his filmmaking career, MacDonald found himself in need of a tool that could help him pre-visualize different camera and lens combinations — “like a director’s viewfinder app,” he says. And while he caught a few decent options on the market, MacDonald wanted an app with iOS design language that felt more at home on his iPhone. “So I dove in, watched videos, and taught myself how to make it,” he says.
My primary use cases were going fishing, heading to the beach, or trying to catch a sunset.
Tucker MacDonald, Tide Guide
Before too long, MacDonald drifted away from filmmaking and into development, taking a job as a UI designer for a social app. “The app ended up failing, but the job taught me how a designer works with an engineer,” he says. “I also learned a lot about design best practices, because I had been creating apps that used crazy elements, non-standard navigation, stuff like that.”
Tucker MacDonald grew up fishing with his grandfather in the waters off Cape Cod.
Armed with growing design knowledge, he started thinking about those mornings with his grandfather, and how he might create something that could speed up the crucial process of finding optimal fishing conditions. And it didn’t need to be rocket science. “My primary use cases were going fishing, heading to the beach, or trying to catch a sunset,” he says. “I just needed to show current conditions.”
I’d say my designs were way prettier than the code I wrote.
Tucker MacDonald, Tide Guide
In the following years, Tide Guide grew in parallel with MacDonald’s self-taught skill set. “There was a lot of trial and error, and I’d say my designs were way prettier than the code I wrote,” he laughs. “But I learned both coding and design by reading documentation and asking questions in the developer community.”
Today’s Tide Guide is quite the upgrade from that initial version. MacDonald continues to target anyone heading to the ocean but includes powerful metrics — like an hour-by-hour 10-day forecast, water temperatures, and swell height — that advanced users can seek out as needed. The app’s palette is even designed to match the color of the sky throughout the day. “The more time you spend with it, the more you can dig into different layers,” he says.
All the information you need for a day on the water, in one place.
People around the world have dug into those layers, including an Alaskan tour company operator who can only land in a remote area when the tide is right, and a nonprofit national rescue service in Scotland, whose members weighed in with a Siri shortcut-related workflow request that MacDonald promptly included. And as Tide Guide gets bigger, MacDonald’s knowledge of developing — and oceanography — continues to swell. “I’m just happy that my passion for crafting an incredible experience comes through,” he says, “because I really do have so much fun making it.”
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
What’s new for apps distributed in the European Union
Core Technology Fee (CTF)
The CTF is an element of the alternative business terms in the EU that reflects the value Apple provides developers through tools, technologies, and services that enable them to build and share innovative apps. We believe anyone with a good idea and the ingenuity to bring it to life should have the opportunity to offer their app to the world. Only developers who reach significant scale (more than one million first annual installs per year in the EU) pay the CTF. Nonprofit organizations, government entities, and educational institutions approved for a fee waiver don’t pay the CTF. Today, we’re introducing two additional conditions in which the CTF is not required:
- First, no CTF is required if a developer has no revenue whatsoever. This includes creating a free app without monetization that is not related to revenue of any kind (physical, digital, advertising, or otherwise). This condition is intended to give students, hobbyists, and other non-commercial developers an opportunity to create a popular app without paying the CTF.
- Second, small developers (less than €10 million in global annual business revenue*) that adopt the alternative business terms receive a 3-year free on-ramp to the CTF to help them create innovative apps and rapidly grow their business. Within this 3-year period, if a small developer that hasn’t previously exceeded one million first annual installs crosses the threshold for the first time, they won’t pay the CTF, even if they continue to exceed one million first annual installs during that time. If a small developer grows to earn global revenue between €10 million and €50 million within the 3-year on-ramp period, they’ll start to pay the CTF after one million first annual installs up to a cap of €1 million per year.
This week, the European Commission designated iPadOS a gatekeeper platform under the Digital Markets Act. Apple will bring our recent iOS changes for apps in the European Union (EU) to iPadOS later this fall, as required. Developers can choose to adopt the Alternative Terms Addendum for Apps in the EU that will include these additional capabilities and options on iPadOS, or stay on Apple’s existing terms.
Once these changes are publicly available to users in the EU, the CTF will also apply to iPadOS apps downloaded through the App Store, Web Distribution, and/or alternative marketplaces. Users who install the same app on both iOS and iPadOS within a 12-month period will only generate one first annual install for that app. To help developers estimate any potential impact on their app businesses under the Alternative Terms Addendum for Apps in the EU, we’ve updated the App Install reports in App Store Connect that can be used with our fee calculator.
For more details, visit Understanding the Core Technology Fee for iOS apps in the European Union. If you’ve already entered into the Alternative Terms Addendum for Apps in the EU, be sure to sign the updated terms.
*Global business revenue takes into account revenue across all commercial activity, including from associated corporate entities. For additional details, read the Alternative Terms Addendum for Apps in the EU.
Reminder: Privacy requirement for app submissions starts May 1
The App Store was created to be a safe place for users to discover and get millions of apps all around the world. Over the years, we’ve built many critical privacy and security features that help protect users and give them transparency and control — from Privacy Nutrition Labels to app tracking transparency, and so many more.
An essential requirement of maintaining user trust is that developers are responsible for all of the code in their apps, including code frameworks and libraries from other sources. That’s why we’ve created privacy manifests and signature requirements for the most popular third-party SDKs, as well as required reasons for covered APIs.
Starting May 1, 2024, new or updated apps that have a newly added third-party SDK that’s on the list of commonly used third-party SDKs will need all of the following to be submitted in App Store Connect:
- Required reasons for each listed API
- Privacy manifests
- Valid signatures when the SDK is added as a binary dependency
Apps won’t be accepted if they fail to meet the manifest and signature requirements. Apps also won’t be accepted if all of the following apply:
- They’re missing a reason for a listed API
- The code is part of a dynamic framework embedded via the Embed Frameworks build phase
- The framework is a newly added third-party SDK that’s on the list of commonly used third-party SDKs
In the future, these required reason requirements will expand to include the entire app binary. If you’re not using an API for an approved reason, please find an alternative. These changes are designed to help you better understand how third-party SDKs use data, secure software dependencies, and provide additional privacy protection for users.
This is a step forward for all apps and we encourage all SDKs to adopt this functionality to better support the apps that depend on them.
Q&A: Promoting your app or game with Apple Search Ads
Apple Search Ads helps you drive discovery of your app or game on the App Store. We caught up with the Apple Search Ads team to learn more about successfully using the service, including signing up for the free online Apple Search Ads Certification course.
How might my app or game benefit from promotion on the App Store?
With Apple Search Ads, developers are seeing an increase in downloads, retention, return on ad spend, and more. Find out how the developers behind The Chefz, Tiket, and Petit BamBou have put the service into practice.
Where will my ad appear?
You can reach people in the following places:
- The Today tab, where people start their App Store visit.
- The Search tab, before people search for something specific.
- Search results, at the top of the results list.
- Product pages, in the “You Might Also Like” section.
Online Apple Search Ads Certification training teaches proven best practices for driving stronger campaign performance. Certification training is designed for all skill levels, from marketing pros to those just starting out. To become certified, complete all of the Certification lessons (each takes between 10 and 20 minutes), then test your skills with a free exam. Once you’re certified, you can share your certificate with your professional network on platforms like LinkedIn.
Sign up here with your Apple ID.
Will my certification expire?
Although your Apple Search Ads certification never expires, training is regularly updated. You can choose to be notified about these updates through email or web push notifications.
Can I highlight specific content or features in my ads?
You can use the custom product pages you create in App Store Connect to tailor your ads for a specific audience, feature launch, seasonal promotion, and more. For instance, you can create an ad for the Today tab that leads people to a specific custom product page or create ad variations for different search queries. Certification includes a lesson on how to do so.
Can I advertise my app before launch?
You can use Apple Search Ads to create ads for apps you’ve made available for pre-order. People can order your app before it’s released, and it’ll automatically download onto their devices on release day.
Apple Search Ads now available in Brazil and more Latin American markets
Drive discovery and downloads on the App Store with Apple Search Ads in 70 countries and regions, now including Brazil, Bolivia, Costa Rica, the Dominican Republic, El Salvador, Guatemala, Honduras, Panama, and Paraguay.
Visit the Apple Search Ads site and Q&A.
And explore best practices to improve your campaign performance with the free Apple Search Ads Certification course.
Let loose.
Watch the May 7 event at apple.com, on Apple TV, or on YouTube Live.
Check out our newest developer activities
Join us around the world to learn about growing your business, elevating your app design, and preparing for the App Review process. Here’s a sample of our new activities — and you can always browse the full schedule to find more.
- Expand your app to new markets: Learn how to bring your apps and games to Southeast Asia, Hong Kong, and Taiwan in new online sessions with App Store experts.
- Request a one-on-one App Review consultation: Meet online to discuss the App Review Guidelines and explore best practices for a smooth review process.
- Visit the Apple Vision Pro developer labs: Test, refine, and optimize your apps and games for the infinite canvas — with in-person help from Apple.
- Request a design or technology consultation: In this 30-minute online consultation, you’ll get expert advice tailored to your app or game.
Web Distribution now available in iOS 17.5 beta 2 and App Store Connect
Web Distribution lets authorized developers distribute their iOS apps to users in the European Union (EU) directly from a website owned by the developer. Apple will provide developers access to APIs that facilitate the distribution of their apps from the web, integrate with system functionality, and back up and restore users’ apps, once they meet certain requirements designed to help protect users and platform integrity. For details, visit Getting started with Web Distribution in the EU.
Get ready with the latest beta releases
The beta versions of iOS 17.5, iPadOS 17.5, macOS 14.5, tvOS 17.5, visionOS 1.2, and watchOS 10.5 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.3.
Updated App Review Guidelines now available
The App Review Guidelines have been revised to support updated policies, upcoming features, and to provide clarification. The following guidelines have been updated:
- 3.1.1(a): Updated to include Music Streaming Services Entitlements.
- 4.7: Added games from retro game console emulator apps to the list of permitted software, and clarified that mini apps and mini games must be HTML5.
Hello Developer: April 2024
Welcome to Hello Developer — and the kickoff to WWDC season. In this edition:
- Discover what’s ahead at WWDC24 — and check out the new Apple Developer YouTube channel.
- Learn how the all-new Develop in Swift Tutorials can help jump-start a career in app development.
- Find out how Zach Gage and Jack Schlesinger rebooted the crossword puzzle with Knotwords.
WWDC24
The countdown is on
WWDC season is officially here.
This year’s Worldwide Developers Conference takes place online from June 10 through 14, offering you the chance to explore the new tools, frameworks, and technologies that’ll help you create your best apps and games yet.
All week long, you can learn and refine new skills through video sessions, meet with Apple experts to advance your projects and ideas, and join the developer community for fun activities. It’s an innovative week of technology and creativity — all online at no cost.
And for the first time, WWDC video sessions will be available on YouTube, in addition to the Apple Developer app and website. Visit the new Apple Developer channel to subscribe and catch up on select sessions.
TUTORIALS
Check out the new Develop in Swift Tutorials
Know a student or aspiring developer looking to start their coding journey? Visit the all-new Develop in Swift Tutorials, designed to introduce Swift, SwiftUI, and spatial computing through the experience of building a project in Xcode.
BEHIND THE DESIGN
Gage and Schlesinger at the crossroads
Learn how acclaimed game designers Zach Gage and Jack Schlesinger reimagined the crossword with Knotwords.
Knotwords: Gage and Schlesinger at the crossroads View now
MEET WITH APPLE EXPERTS
Browse new developer activities
Check out this month’s sessions, labs, and consultations, held online and in person around the world.
NEWS AND DOCUMENTATION
Explore and create with new and updated docs
- Check out two new sample code projects about creating and viewing stereo MV-HEVC movies: Converting side-by-side 3D video to multiview HEVC and Reading multiview 3D video files.
- Find out about creating distribution-signed code for macOS, and explore the details of packaging Mac software for distribution.
- Learn what’s new in the Human Interface Guidelines, including guidance on displaying virtual hands, organizing your spatial layouts, and using Activity rings in your app.
View the complete list of new resources.
Subscribe to Hello Developer
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Knotwords: Gage and Schlesinger at the crossroads
Knotwords is a clever twist on crossword puzzles — so much so that one would expect creators Zach Gage and Jack Schlesinger to be longtime crossword masters who set out to build themselves a new challenge.
One would be totally wrong.
“Crosswords never hit with me,” says Gage, with a laugh. “I dragged myself kicking and screaming into this one.”
It’s not about ‘What random box of words will you get?’ but, ‘What are the decisions you’ll make as a player?’
Jack Schlesinger, Knotwords
In fact, Gage and Schlesinger created the Apple Design Award finalist Knotwords — and the Apple Arcade version, Knotwords+ — not to revolutionize the humble crossword but to learn it. “We know people like crosswords,” says Schlesinger, “so we wanted to figure out what we were missing.” And the process didn’t just result in a new game — it led them straight to the secret of word-game design success. “It’s not about ‘What random box of words will you get?’” says Schlesinger, “but, ‘What are the decisions you’ll make as a player?’”
Knotwords challenges players to complete a puzzle using only specific letters in specific parts of the board.
Gage and Schlesinger are longtime design partners; in addition to designing Knotwords and Good Sudoku with Gage, Schlesinger contributed to the 2020 reboot of SpellTower and the Apple Arcade title Card of Darkness. Neither came to game design through traditional avenues: Gage has a background in interactive art, while Schlesinger is the coding mastermind with a history in theater and, of all things, rock operas. (He’s responsible for the note-perfect soundtracks for many of the duo’s games.) And they’re as likely to talk about the philosophy behind a game as the development of it.
I had been under the mistaken impression that the magic of a simple game was in its simple rule set. The magic actually comes from having an amazing algorithmic puzzle constructor.
Zach Gage
“When you’re playing a crossword, you’re fully focused on the clues. You’re not focused on the grid at all,” explains Gage. “But when you’re building a crossword, you’re always thinking about the grid. I wondered if there was a way to ask players not to solve a crossword but recreate the grid instead,” he says.
Knotwords lets players use only specific letters in specific sections of the grid — a good idea, but one that initially proved elusive to refine and difficult to scale. “At first, the idea really wasn’t coming together,” says Gage, “so we took a break and built Good Sudoku.” Building their take on sudoku — another game with simple rules and extraordinary complexity — proved critical to restarting Knotwords. “I had been under the mistaken impression that the magic of a simple game was in its simple rule set,” Gage says. “The magic actually comes from having an amazing algorithmic puzzle constructor.”
An early — and very analog — prototype of Knotwords.
Problematically, they didn’t have one of those just lying around. But they did have Schlesinger. “I said, ‘I will make you a generator for Knotwords in two hours,’” Schlesinger laughs. That was maybe a little ambitious. The first version took eight hours and was, by his own account, not great. However, it proved a valuable learning experience. “We learned that we needed to model a player. What would someone do here? What steps could they take? If they make a mistake, how long would it take them to correct it?” In short, the puzzle generation algorithm needed to take into account not just rules, but also player behavior.
The work provided the duo an answer for why people liked crosswords. It also did one better by addressing one of Gage’s longstanding game-design philosophies. “To me, the only thing that’s fun in a game is the process of getting better,” says Gage. “In every game I’ve made, the most important questions have been: What’s the journey that people are going through and how can we make that journey fun? And it turns out it’s easy to discover that if I’ve never played a game before.”
Find Knotwords+ on Apple Arcade
Behind the Design is a series that explores design practices and philosophies from each of the winners and finalists of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
WWDC24: June 10-14
Join the worldwide developer community online for a week of technology and creativity.
Be there for the unveiling of the latest Apple platforms, technologies, and tools. Learn how to create and elevate your apps and games. Engage with Apple designers and engineers and connect with the worldwide developer community. All online and at no cost.
Provide your trader status in App Store Connect
To align with the Digital Services Act (DSA) in the European Union (EU), Account Holders and Admins in the Apple Developer Program can now enter their trader status in App Store Connect.
Submission requirements
You’ll need to let us know whether or not you’re a trader to submit new apps to the App Store. If you’re a trader, you may be asked for documentation that verifies your trader contact information.
More options for apps distributed in the European Union
We’re providing more flexibility for developers who distribute apps in the European Union (EU), including introducing a new way to distribute apps directly from a developer’s website.
More flexibility
Developers who’ve agreed to the Alternative Terms Addendum for Apps in the EU have new options for their apps in the EU:
- Alternative app marketplaces. Marketplaces can choose to offer a catalog of apps solely from the developer of the marketplace.
- Linking out to purchase. When directing users to complete a transaction for digital goods or services on an external webpage, developers can choose how to design promotions, discounts, and other deals. The Apple-provided design templates, which are optimized for key purchase and promotional use cases, are now optional.
Web Distribution, available with a software update later this spring, will let authorized developers distribute their iOS apps to EU users directly from a website owned by the developer. Apple will provide authorized developers access to APIs that facilitate the distribution of their apps from the web, integrate with system functionality, back up and restore users’ apps, and more. For details, visit Getting ready for Web Distribution in the EU.
Uncovering the hidden joys of Finding Hannah
On its surface, Finding Hannah is a bright and playful hidden-object game — but dig a little deeper and you’ll find something much more.
The Hannah of Finding Hannah is a 38-year-old Berlin resident trying to navigate career, relationships (including with her best friend/ex, Emma), and the nagging feeling that something’s missing in her life. To help find answers, Hannah turns to her nurturing grandmother and free-spirited mother — whose own stories gradually come into focus and shape the game’s message as well.
“It’s really a story about three women from three generations looking for happiness,” says Franziska Zeiner, cofounder and co-CEO of the Fein Games studio. “For each one, times are changing. But the question is: Are they getting better?”
Locate hidden objects in this lively Berlin subway scene to move along the story of Finding Hannah.
To move the story along, players comb through a series of richly drawn scenes — a packed club, a bustling train, a pleasantly cluttered bookstore. Locating (and merging) hidden items unlocks new chapters, and the more you find, the more the time-hopping story unfolds. The remarkable mix of message and mechanic made the game a 2023 Apple Design Award finalist, as well as a Cultural Impact winner in the 2023 App Store Awards.
Fein Games is the brainchild of Zeiner and Lea Schönfelder, longtime friends from the same small town in Germany who both pursued careers in game design — despite not being all that into video games growing up. “I mean, at some point I played The Sims as a teenager,” laughs Zeiner, “but games were rare for us. When I eventually went to study game design, I felt like I didn’t really fit in, because my game literacy was pretty limited.”
The goal is to create for people who enjoy authentic female experiences in games.
Lea Schönfelder, cofounder and co-CEO of Fein Games
Cofounder and co-CEO Schönfelder also says she felt like an outsider, but soon found game design a surprisingly organic match for her background in illustration and animation. “In my early years, I saw a lot of people doing unconventional things with games and thought, ‘Wow, this is really powerful.’ And I knew I loved telling stories, maybe not in a linear form but a more systematic way.” Those early years included time with studios like Nerial and ustwo Games, where she worked on Monument Valley 2 and Assemble With Care.
Drawing on their years of experience — and maybe that shared unconventional background — the pair went out on their own to launch Fein Games in 2020. From day one, the studio was driven by more than financial success. “The goal is to create for people who enjoy authentic female experiences in games,” says Schönfelder. “But the product is only one side of the coin — there’s also the process of how you create, and we’ve been able to make inclusive games that maybe bring different perspectives to the world.”
Hannah and her free-spirited mother, Sigrid, share an uncomfortable conversation.
Finding Hannah was driven by those perspectives from day one. The story was always meant to be a time-hopping journey featuring women in Berlin, and though it isn’t autobiographical, bits and pieces do draw from their creators’ lives. “There’s a scene inspired by my grandmother, who was a nurse during the second world war and would tan with her friends on a hospital roof while the planes circled above,” says Schönfelder. The script was written by Berlin-based author Rebecca Harwick, who also served as lead writer on June’s Journey and writer on Switchcraft, The Elder Scrolls Online, and many others.
In the beginning, I felt like I wasn’t part of the group, and maybe even a little ashamed that I wasn’t as games-literate as my colleagues. But what I thought was a weakness was actually a strength.
Lea Schönfelder, cofounder and co-CEO of Fein Games
To design the art for the different eras, the team tried not to think like gamers. “The idea was to try to reach people who weren’t gamers yet, and we thought we’d most likely be able to do that if we found a style that hadn’t been seen in games before,” says Zeiner. To get there, they hired Elena Resko, a Russian-born artist based in Berlin who’d also never worked in games. “What you see is her style,” says Schönfelder. “She didn’t develop that for the game. I think that’s why it has such a deep level of polish, because Elena has been developing her style for probably a decade now.”
And the hidden-object and merge gameplay mechanic itself is an example of sticking with a proven success. “When creating games, you usually want to invent a new mechanic, right?” says Schönfelder. “But Finding Hannah is for a more casual audience. And it’s been proven that the hidden-object mechanic works. So we eventually said, ‘Well, maybe we don’t need to reinvent the wheel here,’” she laughs.
The scene in which Hannah’s grandmother sits with friends on the roof was inspired by Lea Schönfelder’s grandmother.
The result is a hidden-object game like none other, part puzzler, part historically flavored narrative, part meditation on the choices faced by women across generations. And it couldn’t have come from a team with any other background. “In the beginning, I felt like I wasn’t part of the group, and maybe even a little ashamed that I wasn’t as games-literate as my colleagues,” says Schönfelder. “But what I thought was a weakness was actually a strength. Players don’t always play your game like you intended. And I felt a very strong, very sympathetic connection to people, and wanted to make the experience as smooth and accessible as possible. And I think that shows.”
Learn more about Finding Hannah
Download Finding Hannah from the App Store
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Q&A with the Mac notary service team
Security is at the core of every Apple platform. The Mac notary service team is part of Apple Security Engineering and Architecture, and in this Q&A, they share their tips on app distribution and account security to help Mac developers have a positive experience — and protect their users.
When should I submit my new app for notarization?
Apps should be mostly complete at the time of notarization. There’s no need to notarize an app that isn’t functional yet.
How often should I submit my app for notarization?
You should submit all versions you might want to distribute, including beta versions. That’s because we build a profile of your unique software to help distinguish your apps from other developers’ apps, as well as malware. As we release new signatures to block malware, this profile helps ensure that the software you’ve notarized is unaffected.
What happens if my app is selected for additional analysis?
Some uploads to the notary service require additional evaluation. If your app falls into this category, rest assured that we’ve received your file and will complete the analysis, though it may take longer than usual. In addition, if you’ve made changes to your app while a prior upload has been delayed, it’s fine to upload a new build.
What should I do if my app is rejected?
Keep in mind that empty apps or apps that might damage someone’s computer (by changing important system settings without the owner’s knowledge, for instance) may be rejected, even if they’re not malicious. If your app is rejected, first confirm that your app doesn’t contain malware. Then determine whether it should be distributed privately instead, such as within your enterprise via MDM.
What should I do if my business changes?
Keep your developer account details — including your business name, contact info, address, and agreements — up to date. Drastic shifts in account activity or software you notarize can be signs that your account or certificate has been compromised. If we notice this type of activity, we may suspend your account while we investigate further.
I’m a contractor. What are some ways to make sure I’m developing responsibly?
Be cautious if anyone asks you to:
- Sign, notarize, or distribute binaries that you didn’t develop.
- Develop software that appears to be a clone of existing software.
- Develop what looks like an internal enterprise application when your customer isn’t an employee of that company.
- Develop software in a high-risk category, like VPNs, system utilities, finance, or surveillance apps. These categories of software have privileged access to private data, increasing the risk to users.
Remember: It’s your responsibility to know your customer and the functionality of all software you build and/or sign.
What can I do to maintain control of my developer account?
Since malware developers may try to gain access to legitimate accounts to hide their activity, be sure you have two-factor authentication enabled. Bad actors may also pose as consultants or employees and ask you to add them to your developer team. Luckily, there’s an easy solve: Don’t share access to your accounts.
Should I remove access for developers who are no longer on my team?
Yes. And we can revoke Developer ID certificates for you if you suspect they may have been compromised.
Learn more about notarization
Notarizing macOS software before distribution
Hello Developer: March 2024
Welcome to Hello Developer. In this edition:
- Find out what you can do at the Apple Developer Centers in Bengaluru, Cupertino, Shanghai, and Singapore.
- Learn how the team behind Finding Hannah created a hidden-object game with a meaningful message.
- Get security tips from the Mac notary service team.
- Catch up on the latest news and documentation.
FEATURED
Step inside the Apple Developer Centers
The new Apple Developer Centers are open around the world — and we can’t wait for you to come by. With locations in Bengaluru, Cupertino, Shanghai, and now Singapore, Apple Developer Centers are the home bases for in-person sessions, labs, workshops, and consultations around the world.
Whether you’re looking to enhance your existing app or game, refine your design, or launch a new project, there’s something exciting for you at the Apple Developer Centers. Browse activities in Bengaluru, Cupertino, Shanghai, and Singapore.
BEHIND THE DESIGN
Uncover the hidden joys of Finding Hannah
On its surface, Finding Hannah is a bright and playful hidden-object game — but dig a little deeper and you’ll find something more. “It’s really a story about three women from three generations looking for happiness,” says Franziska Zeiner, cofounder and co-CEO of the Fein Games studio. “For each one, times are changing. But the question is: Are they getting better?” Find out how Zeiner and her Berlin-based team created this compelling Apple Design Award finalist.
Uncovering the hidden joys of Finding Hannah View now
Q&A
Get answers from the Mac notary service team
Security is at the core of every Apple platform. The Mac notary service team is part of Apple Security Engineering and Architecture, and in this Q&A, they share their tips on app distribution and account security to help Mac developers have a positive experience — and protect their users.
Q&A with the Mac notary service team View now
VIDEOS
Improve your subscriber retention with App Store features
In this new video, App Store experts share their tips for minimizing churn and winning back subscribers.
Improve your subscriber retention with App Store features Watch now
GROW YOUR BUSINESS
Make the most of custom product pages
Learn how you can highlight different app capabilities and content through additional (and fully localizable) versions of your product page. With custom product pages, you can create up to 35 additional versions — and view their performance data in App Store Connect.
Plus, thanks to seamless integration with Apple Search Ads, you can use custom product pages to easily create tailored ad variations on the App Store. Read how apps like HelloFresh, Pillow, and Facetune used the feature to gain performance improvements, like higher tap-through and conversion rates.
DOCUMENTATION
Find the details you need in new and updated docs
- Create complex materials and effects for 3D content with Shader Graph, a node-based material editor in Reality Composer Pro.
- Use SwiftData to add persistence to your app with minimal code and no external dependencies. Check out new documentation on classes, macros, and structures (see the sketch after this list).
- Learn how to share configurations across Xcode Cloud workflows.
- Explore HIG updates about visionOS support, including new details on immersive experiences, the virtual keyboard, layout, color, and motion.
- New in Technotes: Learn how to identify and handle CloudKit throttles. Plus, find out how to recognize and resolve synchronization issues when working with NSPersistentCloudKitContainer, and how to explore details inside the container by capturing and analyzing a sysdiagnose.
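To give a feel for how little code SwiftData needs, here is a minimal sketch of a persisted model and a query-driven view. The Note model, its properties, and the NotesApp/ContentView names are hypothetical and exist purely for illustration.

```swift
import SwiftUI
import SwiftData

// Hypothetical model: a class annotated with @Model becomes persistable.
@Model
final class Note {
    var text: String
    var createdAt: Date

    init(text: String, createdAt: Date = .now) {
        self.text = text
        self.createdAt = createdAt
    }
}

@main
struct NotesApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        // Attach a model container so views can read and write Note instances.
        .modelContainer(for: Note.self)
    }
}

struct ContentView: View {
    @Environment(\.modelContext) private var context
    @Query(sort: \Note.createdAt) private var notes: [Note]

    var body: some View {
        NavigationStack {
            List(notes) { note in
                Text(note.text)
            }
            .toolbar {
                Button("Add") {
                    // Inserting into the context is enough; SwiftData persists it.
                    context.insert(Note(text: "Hello"))
                }
            }
        }
    }
}
```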
View the full list of new resources
NEWS
Catch up on the latest updates
- App Store Connect upload requirement: Starting April 29, 2024, uploaded apps must be built with Xcode 15 for iOS 17, iPadOS 17, tvOS 17, or watchOS 10.
- Updates to support app distribution changes in the European Union: Learn how we’re continuing to provide new ways to understand and utilize these changes.
- App Store Connect update: Learn about changes to app statuses and support for features related to alternative app distribution in the EU.
- App Store Connect API 3.3: Manage distribution keys, alternative distribution packages, and marketplace search for alternative app distribution in the EU.
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
New App Store and iOS data analytics now available
We’re expanding the analytics available for your apps to help you get even more insight into your business and apps’ performance.
Over 50 new reports are now available through the App Store Connect API to help you analyze your apps’ App Store and iOS performance. These reports include hundreds of new metrics that can enable you to evaluate your performance and find opportunities for improvement. Reports are organized into the following categories:
- App Store Engagement — the number of users on the App Store interacting with a developer’s app or sharing it with others
- App Store Commerce — downloads, sales, pre-orders, and transactions made with the secure App Store In-App Purchase system
- App Usage — active devices, installs, app deletions, and more
- Frameworks Usage — an app’s interaction with OS capabilities, such as PhotoPicker and Widgets
- Performance — how your apps perform and how users interact with specific features
New reports are also available through the CloudKit console with data about Apple Push Notifications and File Provider.
- Apple Push Notifications — notification states as they pass through the Apple Push Notification service (APNs)
- File Provider — usage, consistency, and error data
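As a rough idea of what pulling these reports programmatically involves, the sketch below signs the ES256 JWT that the App Store Connect API expects and issues an authenticated request. The issuer ID, key ID, .p8 key contents, and especially the analyticsReportRequests path are assumptions made for illustration; check the App Store Connect API reference for the exact analytics endpoints.

```swift
import Foundation
import CryptoKit

// Minimal sketch of an App Store Connect API client in Swift.
// issuerID, keyID, and privateKeyPEM come from an API key created in App Store Connect;
// the analyticsReportRequests path below is an assumption, not a verified route.
struct AppStoreConnectClient {
    let issuerID: String
    let keyID: String
    let privateKeyPEM: String   // contents of the downloaded .p8 file

    // Builds the short-lived ES256 JWT the API expects as a bearer token.
    func makeToken() throws -> String {
        func base64URL(_ data: Data) -> String {
            data.base64EncodedString()
                .replacingOccurrences(of: "+", with: "-")
                .replacingOccurrences(of: "/", with: "_")
                .replacingOccurrences(of: "=", with: "")
        }
        let now = Int(Date().timeIntervalSince1970)
        let header: [String: Any] = ["alg": "ES256", "kid": keyID, "typ": "JWT"]
        let payload: [String: Any] = [
            "iss": issuerID,
            "iat": now,
            "exp": now + 20 * 60,          // tokens are valid for at most 20 minutes
            "aud": "appstoreconnect-v1"
        ]
        let headerPart = base64URL(try JSONSerialization.data(withJSONObject: header))
        let payloadPart = base64URL(try JSONSerialization.data(withJSONObject: payload))
        let signingInput = Data("\(headerPart).\(payloadPart)".utf8)
        let key = try P256.Signing.PrivateKey(pemRepresentation: privateKeyPEM)
        let signature = try key.signature(for: signingInput)
        return "\(headerPart).\(payloadPart).\(base64URL(signature.rawRepresentation))"
    }

    // Fetches the analytics report requests for an app (endpoint path assumed).
    func fetchAnalyticsReportRequests(appID: String) async throws -> Data {
        let url = URL(string: "https://api.appstoreconnect.apple.com/v1/apps/\(appID)/analyticsReportRequests")!
        var request = URLRequest(url: url)
        let token = try makeToken()
        request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
        let (data, _) = try await URLSession.shared.data(for: request)
        return data
    }
}
```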
Updates to app distribution in the European Union
Over the past several weeks, we’ve communicated with thousands of developers to discuss DMA-related changes to iOS, Safari, and the App Store impacting apps in the European Union. As a result of the valuable feedback received, we’ve revised the Alternative Terms Addendum for Apps in the EU to update the following policies and provide developers more flexibility:
- Decisioning by membership: To make it easier for more developers to sign up for the new terms, we’ve removed the corporate entity requirement that the Addendum must be signed by each membership that controls, is controlled by, or is under common control with another membership. This means an entity can now choose to sign up for the new terms at the developer account level.
- Switching back: To help reduce the risk of unexpected business changes under the new terms, such as reaching massive scale more quickly than anticipated, or if you simply change your mind, we’ve created a one-time option to terminate the Addendum under certain circumstances and switch back to Apple’s standard business terms for your EU apps. For details, view the Addendum.
- Alternative app marketplace requirements: To make it easier for developers who want to create alternative app marketplaces, we’ve added a new eligibility criterion that lets developers qualify without a stand-by letter of credit. For details, view the marketplace support page.
If you’ve already entered into the Addendum, you can sign the updated version here.
The latest OS Release Candidates are now available
You can now submit your apps and games built with Xcode 15.3 and all the latest SDKs for iOS 17.4, iPadOS 17.4, macOS 14.4, tvOS 17.4, visionOS 1.1, and watchOS 10.4.
Developers who have agreed to the Alternative Terms Addendum for Apps in the EU can now submit apps offering alternative payment options in the EU. They can also now measure the number of first annual installs their apps have accumulated.
If you’d like to discuss changes to iOS, Safari, and the App Store impacting apps in the EU to comply with the Digital Markets Act, request a 30-minute online consultation with an Apple team member.
Updated App Review Guidelines now available
The App Store Review Guidelines have been revised to support updated policies, upcoming features, and to provide clarification.
- The title of the document has been changed to App Review Guidelines.
- The Introduction section explains that in the European Union, developers can also distribute notarized iOS apps from alternative app marketplaces. This section provides links to further information about alternative app marketplaces and Notarization for iOS apps.
The following guidelines have been updated:
- 2.3.1: Added that a violation of this rule is grounds for an app being blocked from installing via alternative distribution.
- 2.3.10: Added that developers cannot include names, icons, or imagery of other mobile platforms or alternative app marketplaces in their apps or metadata, unless there is specific, approved interactive functionality.
- 3.1.3(b): Added a link to 3.1.1 to make clear that 3.1.1(a) applies, and multiplatform services apps can use the 3.1.1(a) entitlement.
- 4.8 Login Services: Updated to make clear that the login service cannot collect interactions with your app for advertising purposes without consent. It also adds that another login service is not required if your app is an alternative app marketplace, or an app distributed from an alternative app marketplace, that uses a marketplace-specific login for account, download, and commerce features.
- 5.1.1(viii): Added that apps that compile personal information from any source that is not directly from the user or without the user’s explicit consent, even public databases, are not permitted on alternative app marketplaces.
- 5.4 and 5.5: Updated to state that apps that do not comply with these guidelines will be blocked from installing via alternative distribution.
- Bug Fix Submissions: Added that bug fixes will not be delayed for apps that are already on alternative app marketplaces, except for those related to legal or safety issues.
View the App Review Guidelines
Translations of the guidelines will be available on the Apple Developer website within one month.
Privacy updates for App Store submissions
Developers are responsible for all code included in their apps. At WWDC23, we introduced new privacy manifests and signatures for commonly used third-party SDKs and announced that developers will need to declare approved reasons for using a set of APIs in their app’s privacy manifest. These changes help developers better understand how third-party SDKs use data, secure software dependencies, and provide additional privacy protection for users.
Starting March 13: If you upload a new or updated app to App Store Connect that uses an API requiring approved reasons, we’ll send you an email letting you know if you’re missing reasons in your app’s privacy manifest. This is in addition to the existing notification in App Store Connect.
Starting May 1: You’ll need to include approved reasons for the listed APIs used by your app’s code to upload a new or updated app to App Store Connect. If you’re not using an API for an allowed reason, please find an alternative. And if you add a new third-party SDK that’s on the list of commonly used third-party SDKs, these API, privacy manifest, and signature requirements will apply to that SDK. Make sure to use a version of the SDK that includes its privacy manifest and note that signatures are also required when the SDK is added as a binary dependency.
This functionality is a step forward for all apps and we encourage all SDKs to adopt it to better support the apps that depend on them.
App submissions now open for the latest OS releases
Submit in App Store Connect
iOS 17.4, iPadOS 17.4, macOS 14.4, tvOS 17.4, visionOS 1.1, and watchOS 10.4 will soon be available to customers worldwide. Build your apps and games using the Xcode 15.3 Release Candidate and latest SDKs, then test them using TestFlight. You can submit your iPhone and iPad apps today.
Apps in the European Union
Developers who’ve agreed to the Alternative Terms Addendum for Apps in the EU can set up marketplace distribution in the EU. Eligible developers can also submit marketplace apps and offer apps with alternative browser engines.
Once these platform versions are publicly available:
- First annual installs for the Core Technology Fee begin accruing and the new commission rates take effect for these developers.
- Apps offering alternative payment options in the EU will be accepted in App Store Connect. In the meantime, you can test in the sandbox environment.
If you’d like to discuss changes to iOS, Safari, and the App Store impacting apps in the EU to comply with the Digital Markets Act, request a 30-minute online consultation to meet with an Apple team member. In addition, if you’re interested in getting started with operating an alternative app marketplace on iOS in the EU, you can request to attend an in-person lab in Cork, Ireland.
Developer activities you’ll love
Apple developer activities are in full swing. Here’s a look at what’s happening:
- Join an online session hosted by App Store experts to learn how to minimize churn and win back subscribers.
- Celebrate International Women’s Day with special in-person activities in Bengaluru, Cupertino, Shanghai, Singapore, Sydney, and Tokyo.
- Visit an Apple Vision Pro developer lab in Cupertino, London, Munich, Singapore, Sydney, or Tokyo to test and refine your apps for the infinite canvas.
- Meet with an Apple team member to discuss changes to iOS, Safari, and the App Store impacting apps in the European Union to comply with the Digital Markets Act.
And we’ll have lots more activities in store — online, in person, and in multiple languages — all year long.
Q&A with the Apple UX writing team
Writing is fundamental — especially in your apps and games, where the right words can have a profound impact on your experience. During WWDC23, the Apple UX writing team hosted a wide-ranging Q&A that covered everything from technical concepts to inspiring content to whether apps should have “character.” Here are some highlights from that conversation and resources to help you further explore writing for user interfaces.
Writing for interfaces Watch now
My app has a lot of text. What’s the best way to make copy easier to read?
Ask yourself: What am I trying to accomplish with my writing? Once you’ve answered that, you can start addressing the writing itself. First, break up your paragraphs into individual sentences. Then, go back and make each sentence as short and punchy as possible. To go even further, you can start each sentence the same way — like with a verb — or add section headers to break up the copy. Or, to put it another way:
- Break up your paragraphs into individual sentences.
- Make each sentence as short and punchy as possible.
- Start each sentence the same way — like with a verb.
Keep other options in mind too. Sometimes it might be better to get your point across with a video or animation. You might also put a short answer first and expand on it elsewhere. That way, you’re helping people who are new to your app while offering a richer option for those who want to dive a little deeper.
What’s your advice for explaining technical concepts in simple terms?
First, remember that not everyone will have your level of understanding. Sometimes we get so excited about technical details that we forget the folks who might be using an app for the first time.
Try explaining the concept to a friend or colleague first — or ask an engineer to give you a quick summary of a feature.
From there, break down your idea into smaller components and delete anything that isn’t absolutely necessary. Technical concepts can feel even more intimidating when delivered in a big block of text. Can you link to a support page? Do people need that information in this particular moment? Offering small bits of information is always a good first step.
How can I harness the “less is more” concept without leaving people confused?
Clarity should always be the priority. The trick is to make something as long as it needs to be, but as short as it can be. Start by writing everything down — and then putting it away for a few days. When you come back to it, you’ll have a clearer perspective on what can be cut.
One more tip: Look for clusters of short words — those usually offer opportunities to tighten things up.
How should I think about writing my onboarding?
Naturally, this will depend on your app or game — you’ll have to figure out what’s necessary and right for you. But typically, brevity is key when it comes to text — especially at the beginning, when people are just trying to get into the experience.
Consider providing a brief overview of high-level features so people know why they should use your app and what to expect while doing so. Also, think about how they got there. What text did they see before opening your app? What text appeared on the App Store? All of this contributes to the overall journey.
Human Interface Guidelines: Onboarding
Should UX writing have a personal tone? Or does that make localization too difficult?
When establishing your voice and tone, you should absolutely consider adding elements of personality to get the elusive element of “character.” But you’re right to consider how your strings will localize. Ideally, you’ll work with your localization partners for this. Focus on phrases that strike the tone you want without resorting to idioms. And remember that a little goes a long way.
How should I approach writing inclusively, particularly in conveying gender?
This is an incredibly important part of designing for everyone. Consider whether specifying gender is necessary for the experience you’re creating. If gender is necessary, it’s helpful to provide a full set of options — as well as an option to decline the question. Many things can be written without alluding to gender at all and are thus more inclusive. You can also consider using glyphs. SF Symbols provides lots of inclusive options. And you can find more guidance about writing inclusively in the Human Interface Guidelines.
Human Interface Guidelines: Inclusion
What are some best practices for writing helpful notifications?
First, keep in mind that notifications can feel inherently interruptive — and that people receive lots of them all day long. Before you write a notification at all, ask yourself these questions:
- Does the message need to be sent right now?
- Does the message save someone from opening your app?
- Does the message convey something you haven’t already explained?
If you answered yes to all of the above, learn more about notification best practices in the Human Interface Guidelines.
Human Interface Guidelines: Notifications
Can you offer guidance on writing for the TipKit framework?
With TipKit — which displays tips that help people discover features in your app — concise writing is key. Use tips to highlight a brand-new feature in your app, help people discover a hidden feature, or demonstrate faster ways to accomplish a task. Keep your tips to just one idea, and be as clear as possible about the functionality or feature you’re highlighting.
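As a concrete illustration of keeping a tip to one idea, here is a minimal TipKit sketch. The FavoritesTip type, its copy, and the view it attaches to are hypothetical.

```swift
import SwiftUI
import TipKit

// A hypothetical tip that highlights a single feature with one concise idea.
struct FavoritesTip: Tip {
    var title: Text {
        Text("Save your favorites")
    }
    var message: Text? {
        Text("Tap the heart to keep this recipe handy.")
    }
    var image: Image? {
        Image(systemName: "heart")
    }
}

struct RecipeView: View {
    private let favoritesTip = FavoritesTip()

    var body: some View {
        Button {
            // Mark a favorite here…
        } label: {
            Image(systemName: "heart")
        }
        // Shows the tip anchored to the button until it's dismissed or invalidated.
        .popoverTip(favoritesTip)
        .task {
            // Configure TipKit once, typically at app launch.
            try? Tips.configure()
        }
    }
}
```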
What’s one suggestion you would give writers to improve their content?
One way we find the perfect (or near-perfect) sentence is to show it to other people, including other writers, designers, and creative partners. If you don’t have that option, run your writing by someone else working on your app or even a customer. And you can always read out loud to yourself — it’s an invaluable way to make your writing sound conversational, and a great way to find and cut unnecessary words.
Hello Developer: February 2024
Welcome to the first Hello Developer of the spatial computing era. In this edition: Join us to celebrate International Women’s Day all over the world, find out how the Fantastical team brought their app to life on Apple Vision Pro, get UX writing advice straight from Apple experts, and catch up on the latest news and documentation.
FEATURED
Join us for International Women’s Day celebrations
This March, we’re honoring International Women’s Day with developer activities all over the world. Celebrate and elevate women in app development through a variety of sessions, panels, and performances.
FEATURED
“The best version we’ve ever made”: Fantastical comes to Apple Vision Pro
The best-in-class calendar app Fantastical has 11 years of history, a shelf full of awards, and plenty of well-organized fans on iPad, iPhone, Mac, and Apple Watch. Yet Fantastical’s Michael Simmons says the app on Apple Vision Pro is “the best version we’ve ever made.” Find out what Simmons learned while building for visionOS — and what advice he’d give fellow developers bringing their apps to Apple Vision Pro.
“The best version we’ve ever made”: Fantastical comes to Apple Vision Pro View now
Q&A
Get advice from the Apple UX writing team
Writing is fundamental — especially in your apps and games, where the right words can have a profound impact on your app’s experience. During WWDC23, the Apple UX writing team hosted a wide-ranging Q&A that covered everything from technical concepts to inspiring content to whether apps should have “character.”
Q&A with the Apple UX writing team View now
NEWS
Download the Apple Developer app on visionOS
Apple Developer has come to Apple Vision Pro. Experience a whole new way to catch up on WWDC videos, browse news and features, and stay up to date on the latest Apple frameworks and technologies.
Download Apple Developer from the App Store
VIDEOS
Dive into Xcode Cloud, Apple Pay, and network selection
This month’s new videos cover a lot of ground. Learn how to connect your source repository with Xcode Cloud, find out how to get started with Apple Pay on the Web, and discover how your app can automatically select the best network for an optimal experience.
Connect your project to Xcode Cloud Watch now
Get started with Apple Pay on the Web Watch now
Adapt to changing network conditions Watch now
BEHIND THE DESIGN
Rebooting an inventive puzzle game for visionOS
Bringing the mind-bending puzzler Blackbox to Apple Vision Pro presented Ryan McLeod with a challenge and an opportunity like nothing he’d experienced before. Find out how McLeod and team are making the Apple Design Award-winning game come to life on the infinite canvas. Then, catch up on our Apple Vision Pro developer interviews and Q&As with Apple experts.
Blackbox: Rebooting an inventive puzzle game for visionOS View now
Apple Vision Pro developer stories and Q&As View now
MEET WITH APPLE EXPERTS
Sign up for developer activities
This month, you can learn to minimize churn and win back subscribers in an online session hosted by App Store experts, and meet with App Review to explore best practices for a smooth review process. You can also request to attend an in-person lab in Cork, Ireland, to help develop your alternative app marketplace on iOS in the European Union. View the full schedule of activities.
DOCUMENTATION
Explore and create with new and updated docs
- Track specific points in world space: In this new sample app, you’ll learn to use world anchors along with an ARKit session’s WorldTrackingProvider to create coherence and continuity in a 3D world (see the sketch after this list).
- Explore over 400 newly localized SF symbols: Download the latest version of SF Symbols to browse the updates.
- Preview your app's interface in Xcode: Iterate designs quickly and preview your displays across Apple devices.
- Set up or add a Border Router to your Thread network: Configure a Border Router as a bridge between the Thread and Wi-Fi or Ethernet networks in a home.
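For orientation, here is a rough sketch of what the world-anchor flow looks like on visionOS. The placeAnchor function and its transform parameter are hypothetical, and error handling is reduced to a print; the sample app linked above is the authoritative reference.

```swift
import ARKit
import simd

// Hypothetical sketch: persisting a point in world space with a world anchor on visionOS.
func placeAnchor(at transform: simd_float4x4) async {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    do {
        // Start world tracking; this requires a running immersive space.
        try await session.run([worldTracking])

        // Create and persist an anchor at the given position in world space.
        let anchor = WorldAnchor(originFromAnchorTransform: transform)
        try await worldTracking.addAnchor(anchor)

        // Observe updates so content can stay attached across sessions.
        for await update in worldTracking.anchorUpdates {
            print("Anchor \(update.anchor.id) changed: \(update.event)")
        }
    } catch {
        print("World tracking unavailable: \(error)")
    }
}
```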
View the full list of new resources.
Discover what’s new in the Human Interface Guidelines.
NEWS
Catch up on the latest updates
- Swift Student Challenge applications are open: Learn about past Challenge winners and get everything you need to create an awesome app playground.
- App Store Connect API 3.2: Manage your apps on the App Store for Apple Vision Pro and download new Sales and Trends install reports, including information about historical first annual installs.
- New StoreKit entitlement: If your app offers in-app purchases on the App Store for iPhone or iPad in the United States, you can include a link to your website to let people know of other ways to purchase your digital goods or services.
- New reports and sign-in options: You’ll soon be able to view over 50 new reports to help measure your apps’ performance. And you can take advantage of new flexibility when asking users to sign in to your app.
- App distribution in the European Union: We’re sharing some changes to iOS, Safari, and the App Store, impacting developers’ apps in the EU to comply with the Digital Markets Act.
- App Store Review Guideline update: Check out the latest changes to support updated policies and provide clarification.
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
“The best version we’ve ever made”: Fantastical comes to Apple Vision Pro
The best-in-class calendar app Fantastical has more than a decade of history, a shelf full of awards, and plenty of well-organized fans on iPad, iPhone, Mac, and Apple Watch. Yet Michael Simmons, CEO and lead product designer for Flexibits, the company behind Fantastical, says the Apple Vision Pro app is “the best version we’ve ever made.” We asked Simmons about what he’s learned while building for visionOS, his experiences visiting the developer labs, and what advice he’d give fellow developers bringing their apps to Vision Pro.
What was your initial approach to bringing Fantastical from iPad to Apple Vision Pro?
The first thing we did was look at the platform to see if a calendar app made sense. We thought: “Could we do something here that’s truly an improvement?” When the answer was yes, we moved on to, “OK, what are the possibilities?” And of course, visionOS gives you unlimited possibilities. You’re not confined to borders; you have the full canvas of the world to create on.
We wanted to take advantage of that infinite canvas. But we also needed to make sure Fantastical felt right at home in visionOS. People want to feel like there’s a human behind the design — especially in our case, where some customers have been with us for almost 13 years. There’s a legacy there, and an expectation that what you’ll see will feel connected to what we’ve done for more than a decade.
I play guitar, so to me it felt like learning an instrument.
Michael Simmons, CEO and lead product designer for Flexibits
In the end, it all felt truly seamless, so much so that once Fantastical was finished, we immediately said, “Well, let’s do [the company’s contacts app] Cardhop too!”
Was there a moment when you realized, “We’ve really got something here”?
It happened as instantly as it could. I play guitar, so to me it felt like learning an instrument. One day it just clicks — the songs, the notes, the patterns — and feels like second nature. For me, it felt like those movies where a musical prodigy feels the music flowing out of them.
How did you approach designing for visionOS?
We focused a lot on legibility of the fonts, buttons, and other screen elements. The opaque background didn’t play well with elements from other operating systems, for example, so we tweaked it. We stayed consistent with design language, used system-provided colors as much as possible, built mainly using UIKit, and used SwiftUI for ornaments and other fancy Vision Pro elements. It’s incredible how great the app looked without us needing to rewrite a bunch of code.
How long did the process take?
It was five months from first experiencing the device to submitting a beautiful app. Essentially, that meant three months to ramp up — check out the UI, explore what was doable, and learn the tools and frameworks — and two more months to polish, refine, and test. That’s crazy fast! And once we had that domain knowledge, we were able to do Cardhop in two months. So I’d say if you have an iPad app and that knowledge, it takes just months to create an Apple Vision Pro version of your app.
What advice would you give to other developers looking to bring their iPhone or iPad apps to Apple Vision Pro?
Make sure your app is appropriate for the platform. Look at the device — all of its abilities and possibilities — and think about how your app would feel with unlimited real estate. And if your app makes sense — and most apps do make sense — and you’re already developing for iPad, iPhone, or Mac, it’s a no-brainer to bring it to Apple Vision Pro.
Updates to support app distribution changes in the European Union
We recently announced changes to iOS, Safari, and the App Store impacting developers’ apps in the European Union (EU) to comply with the Digital Markets Act (DMA), supported by more than 600 new APIs, a wide range of developer tools, and related documentation.
And we’re continuing to provide new ways for developers to understand and utilize these changes, including:
- Online consultations to discuss alternative distribution on iOS, alternative payments on the App Store, linking out to purchase on their webpage, new business terms, and more.
- Labs to help develop alternative app marketplaces on iOS.
Developers who have agreed to the new business terms can now use new features in App Store Connect and the App Store Connect API to set up marketplace distribution and marketplace apps, and use TestFlight to beta test these features. TestFlight also supports apps using alternative browser engines, and alternative payments through payment service providers and linking out to a webpage.
And soon, you’ll be able to view expanded app analytics reports for the App Store and iOS.
App Store Connect upload requirement starts April 29
Apps uploaded to App Store Connect must be built with Xcode 15 for iOS 17, iPadOS 17, tvOS 17, or watchOS 10, starting April 29, 2024.
Apply for the Swift Student Challenge now through February 25
Every year, the Swift Student Challenge aims to inspire students to create amazing app playgrounds that can make life better for their communities — and beyond.
Have an app idea that’s close to your heart? Now’s your chance to make it happen. Build an app playground and submit by February 25.
All winners receive a year of complimentary membership in the Apple Developer Program and other exclusive awards. And for the first time ever, we’ll award a select group of Distinguished Winners a trip to Apple Park for an incredible in-person experience.
Request a consultation about the changes to apps distributed in the European Union
Meet with an Apple team member to discuss changes to iOS, Safari, and the App Store impacting apps in the European Union to comply with the Digital Markets Act. Topics include alternative distribution on iOS, alternative payments in the App Store, linking out to purchase on your webpage, new business terms, and more.
Request a 30-minute online consultation to ask questions and provide feedback about these changes.
In addition, if you’re interested in getting started with operating an alternative app marketplace on iOS in the European Union, you can request to attend an in-person lab in Cork, Ireland.
Blackbox: Rebooting an inventive puzzle game for visionOS
If you’ve ever played Blackbox, you know that Ryan McLeod builds games a little differently.
In the inventive iOS puzzler from McLeod’s studio, Shapes & Stories, players solve challenges not by tapping or swiping but by rotating the device, plugging in the USB cable, singing a little tune — pretty much everything except touching the screen.
“The idea was to get people in touch with the world outside their device,” says McLeod, while ambling along the canals of his Amsterdam home base.
I’m trying to figure out what makes Blackbox tick on iOS, and how to bring that to visionOS. That requires some creative following of my own rules — and breaking some of them.
Ryan McLeod
In fact, McLeod freed his puzzles from the confines of a device screen well before Apple Vision Pro was even announced — which made bringing the game to this new platform a fascinating challenge. On iOS and iPadOS, Blackbox plays off the familiarity of our devices. But how do you transpose that experience to a device people haven’t tried yet? And how do you break boundaries on a canvas that doesn’t have any? “I do love a good constraint,” says McLeod, “but it has been fun to explore the lifting of that restraint. I’m trying to figure out what makes Blackbox tick on iOS, and how to bring that to visionOS. That requires some creative following of my own rules — and breaking some of them.”
After a brief onboarding, the game becomes an all-new visionOS experience that takes advantage of the spatial canvas right from the first level selection. “I wanted something a little floaty and magical, but still grounded in reality,” he says. “I landed on the idea of bubbles. They’re like soap bubbles: They’re natural, they have this hyper-realistic gloss, and they move in a way you’re familiar with. The shader cleverly pulls the reflection of your world into them in this really believable, intriguing way.”
And the puzzles within those bubbles? “Unlike Blackbox on iOS, you’re not going to play this when you’re walking home from school or waiting in line,” McLeod says. “It had to be designed differently. No matter how exciting the background is, or how pretty the sound effects are, it’s not fun to just stare at something, even if it’s bobbing around really nicely.”
Ryan McLeod’s notebook shows pen sketches of what will become Blackbox on Apple Vision Pro.
Now, McLeod cautions that Blackbox is still very much a work in progress, and we’re certainly not here to offer any spoilers. But if you want to go in totally cold, it might be best to skip this next part.
In Blackbox, players interact with the space — and their own senses — to explore and solve challenges. One puzzle involves moving your body in a certain manner; another involves sound, silence, and a blob of molten gold floating like an alien in front of you. A third involves Morse code. And solving yet another causes part of the scene to collapse into a portal. “Spatial Audio makes the whole thing kind of alarming but mesmerizing,” he says.
There’s an advantage to not knowing expected or common patterns.
Ryan McLeod
It’s safe to say Blackbox will continue evolving, especially since McLeod is essentially building this plane as he’s flying it — something he views as a positive. “There’s an advantage to not knowing expected or common patterns,” he says. “There’s just so much possibility.”
Apple Vision Pro developer stories and Q&As
Meet some of the incredible teams building for visionOS, and get answers from Apple experts on spatial design and creating great apps for Apple Vision Pro.
Developer stories
“The best version we’ve ever made”: Fantastical comes to Apple Vision Pro View now
Blackbox: Rebooting an inventive puzzle game for visionOS View now
“The full impact of fruit destruction”: How Halfbrick cultivated Super Fruit Ninja on Apple Vision Pro View now
Realizing their vision: How djay designed for visionOS View now
JigSpace is in the driver’s seat View now
PTC is uniting the makers View now
Q&As
Q&A: Spatial design for visionOS View now
Q&A: Building apps for visionOS View now
Price and tax updates for apps, in-app purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help ensure that prices for apps and in-app purchases remain consistent across all storefronts.
Price updates
On February 13, pricing for apps and in-app purchases* will be updated for the Benin, Colombia, Tajikistan, and Türkiye storefronts. These updates also account for the following tax changes:
- Benin: value-added tax (VAT) introduction of 18%
- Tajikistan: VAT rate decrease from 15% to 14%
Prices will be updated on the Benin, Colombia, Tajikistan, and Türkiye storefronts if you haven’t selected one of these as the base for your app or in‑app purchase.*
Prices won’t change on the Benin, Colombia, Tajikistan, or Türkiye storefront if you’ve selected that storefront as the base for your app or in-app purchase.* Prices on other storefronts will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your in‑app purchase is an auto‑renewable subscription and won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of My Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, in‑app purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Pricing and availability start times by region
Set a price for an in-app purchase
Tax updates
Your proceeds for sales of apps and in-app purchases will change to reflect the new tax rates and updated prices. Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Benin.
On January 30, your proceeds from the sale of eligible apps and in‑app purchases were modified in the following countries to reflect introductions or changes in VAT rates.
- Benin: VAT introduction of 18%
- Czechia: VAT rate decreased from 10% to 0% for certain eBooks and audiobooks
- Czechia: VAT rate increased from 10% to 12% for certain eNewspapers and Magazines
- Estonia: VAT rate increased from 20% to 22%
- Ireland: VAT rate decreased from 9% to 0% for certain eBooks and audiobooks
- Luxembourg: VAT rate increased from 16% to 17%
- Singapore: GST rate increased from 8% to 9%
- Switzerland: VAT rate increased from 2.5% to 2.6% for certain eNewspapers, magazines, books and audiobooks
- Switzerland: VAT rate increased from 7.7% to 8.1% for all other apps and in-app purchases
- Tajikistan: VAT rate decreased from 15% to 14%
*Excludes auto-renewable subscriptions.
Get ready with the latest beta releases
The beta versions of iOS 17.4, iPadOS 17.4, macOS 14.4, tvOS 17.4, and watchOS 10.4 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.3 beta.
Apple introduces new options worldwide for streaming game services and apps that provide access to mini apps and games
New analytics reports coming in March for developers everywhere
Developers can also enable new sign-in options for their apps
Today, Apple is introducing new options for how apps globally can deliver in-app experiences to users, including streaming games and mini-programs. Developers can now submit a single app with the capability to stream all of the games offered in their catalog.
Apps will also be able to provide enhanced discovery opportunities for streaming games, mini-apps, mini-games, chatbots, and plug-ins that are found within their apps.
Additionally, mini-apps, mini-games, chatbots, and plug-ins will be able to incorporate Apple’s In-App Purchase system to offer their users paid digital content or services for the first time, such as a subscription for an individual chatbot.
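For a sense of what this looks like in code, the sketch below uses the standard StoreKit 2 purchase flow; the product identifier and the idea of a “chatbot subscription” are placeholder assumptions for illustration only.

```swift
import StoreKit

// Hypothetical sketch: a mini app or chatbot offering a subscription through In-App Purchase.
// The product identifier is a placeholder you would configure in App Store Connect.
func purchaseChatbotSubscription() async throws {
    guard let product = try await Product.products(for: ["com.example.chatbot.monthly"]).first else {
        return
    }
    let result = try await product.purchase()
    switch result {
    case .success(let verification):
        // Verify the signed transaction before unlocking content.
        if case .verified(let transaction) = verification {
            // Unlock the chatbot subscription here…
            await transaction.finish()
        }
    case .userCancelled, .pending:
        break
    @unknown default:
        break
    }
}
```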
Each experience made available in an app on the App Store will be required to adhere to all App Store Review Guidelines and its host app will need to maintain an age rating of the highest age-rated content included in the app.
The changes Apple is announcing reflect feedback from Apple’s developer community and are consistent with the App Store’s mission to provide a trusted place for users to find apps they love and to give developers everywhere new capabilities to grow their businesses. Apps that host this content are responsible for ensuring all the software included in their app meets Apple’s high standards for user experience and safety.
New app analytics
Apple provides developers with powerful dashboards and reports to help them measure their apps’ performance through App Analytics, Sales and Trends, and Payments and Financial Reports. Today, Apple is introducing new analytics for developers everywhere to help them get even more insight into their businesses and their apps’ performance, while maintaining Apple’s long-held commitment to ensure users are not identifiable at an individual level.
Over 50 new reports will be available through the App Store Connect API to help developers analyze their app performance and find opportunities for improvement with more metrics in areas like:
- Engagement — with additional information on the number of users on the App Store interacting with a developer’s app or sharing it with others
- Commerce — with additional information on downloads, sales and proceeds, pre-orders, and transactions made with the App Store’s secure In-App Purchase system
- App usage — with additional information on crashes, active devices, installs, app deletions, and more
- Frameworks usage — with additional information on an app’s interaction with OS functionality such as PhotoPicker, Widgets, and CarPlay
Additional information about report details and access will be available for developers in March.
Developers will have the ability to grant third-party access to their reports conveniently through the API.
More flexibility for sign in options in apps
In line with Apple’s mission to protect user privacy, Apple is updating its App Store Review Guideline for using Sign in with Apple. Sign in with Apple makes it easy for users to sign in to apps and websites using their Apple ID and was built from the ground up with privacy and security in mind. Starting today, developers that offer third-party or social login services within their app will have the option to offer Sign in with Apple, or they will now be able to offer an equivalent privacy-focused login service instead.
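For reference, offering Sign in with Apple from SwiftUI takes only a small amount of code; here is a minimal sketch with a hypothetical LoginView — how you hand the credential to your account system is up to you.

```swift
import SwiftUI
import AuthenticationServices

// Minimal sketch of offering Sign in with Apple alongside other login options.
struct LoginView: View {
    var body: some View {
        SignInWithAppleButton(.signIn) { request in
            // Ask only for what the app actually needs.
            request.requestedScopes = [.fullName, .email]
        } onCompletion: { result in
            switch result {
            case .success(let authorization):
                if let credential = authorization.credential as? ASAuthorizationAppleIDCredential {
                    // Hand the stable user identifier to your account system.
                    print("Signed in as \(credential.user)")
                }
            case .failure(let error):
                print("Sign in failed: \(error)")
            }
        }
        .signInWithAppleButtonStyle(.black)
        .frame(height: 45)
    }
}
```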
Update on apps distributed in the European Union
We’re sharing some changes to iOS, Safari, and the App Store, impacting developers’ apps in the European Union (EU) to comply with the Digital Markets Act (DMA). These changes create new options for developers who distribute apps in any of the 27 EU member states, and do not apply to apps distributed anywhere else in the world. These options include how developers can distribute apps on iOS, process payments, use web browser engines in iOS apps, request interoperability with iPhone and iOS hardware and software features, access data and analytics about their apps, and transfer App Store user data.
If you want nothing to change for you — from how the App Store works currently in the EU and in the rest of the world — no action is needed. You can continue to distribute your apps only on the App Store and use its private and secure In-App Purchase system.
Updated App Store Review Guidelines now available
The App Store Review Guidelines have been revised to support updated policies, upcoming features, and to provide clarification. We now also indicate which guidelines only apply to Notarization for iOS apps in the European Union.
The following guidelines have been divided into subsections for the purposes of Notarization for iOS apps in the EU:
- 2.3.1
- 2.5.16
- 4.1
- 4.3
- 4.6
- 5.1.4
- 5.2.4
The following guidelines have been deleted:
- 2.5.7
- 3.2.2(vi)
- 4.2.4
- 4.2.5
- 4.4.3
2.5.6: Added a link to an entitlement to use an alternative web browser engine in your app in the EU.
3.1.6: Moved to 4.9.
3.2.2(ii): Moved to 4.10.
4.7: Edited to set forth new requirements for mini apps, mini games, streaming games, chatbots, and plug-ins.
4.8: Edited to require an additional login service with certain privacy features if you use a third-party or social login service to set up or authenticate a user’s primary account.
4.9: The original version of this rule (Streaming games) has been deleted and replaced with the Apple Pay guideline.
5.1.2(i): Added that apps may not require users to enable system functionalities (e.g., push notifications, location services, tracking) in order to access functionality, content, use the app, or receive monetary or other compensation, including but not limited to gift cards and codes. A version of this rule was originally published as Guideline 3.2.2(vi).
After You Submit — Appeals: Edited to add an updated link for suggestions for changes to the Guidelines.
The term “auto-renewing subscriptions” was replaced with “auto-renewable subscriptions” throughout.
Translations of the guidelines will be available on the Apple Developer website within one month.
Swift Student Challenge applications open February 5
We’re so excited applications for the Swift Student Challenge 2024 will open on February 5.
Looking for some inspiration? Learn about past Challenge winners to gain insight into the motivations behind their apps.
Just getting started? Get tools, tips, and guidance on everything you need to create an awesome app playground.
“The full impact of fruit destruction”: How Halfbrick cultivated Super Fruit Ninja on Apple Vision Pro
Fruit Ninja has a juicy history that stretches back more than a decade, but Samantha Turner, lead gameplay programmer at the game’s Halfbrick Studios, says the Apple Vision Pro version — Super Fruit Ninja on Apple Arcade — is truly bananas. “When it first came out, Fruit Ninja kind of gave new life to the touchscreen,” she notes, “and I think we have the potential to do something very special here.”
What if players could squeeze juice out of an orange? What if they could rip apart a watermelon and cover the table and walls with juice?
Samantha Turner, lead gameplay programmer at Halfbrick Studios
Turner would know. She’s worked on the Fruit Ninja franchise for nearly a decade, which makes her especially well suited to help grow the game on a new platform. “We needed to understand how to bring those traditional 2D user interfaces into the 3D space,” she says. “We were full of ideas: What if players could squeeze juice out of an orange? What if they could rip apart a watermelon and cover the table and walls with juice?” She laughs, on a roll. “We were really playing with the environment.”
But they also needed to get people into that environment. “That’s where we came up with the flying menu,” she says, referring to the old-timey home screen that’ll feel familiar to Fruit Ninja fans, except for how it hovers in space. “We wanted a friendly and welcoming way to bring people into the immersive space,” explains Turner. “Before we landed on the menu, we were doing things like generating 3D text to put on virtual objects. But that didn’t give us the creative freedom we needed to set the theme for our world.”
To create Super Fruit Ninja, the Halfbrick team worked to bring “traditional 2D interfaces into the 3D space.”
That theme: The good citizens of Fruitasia have discovered a portal to our world — one that magically materializes in the room. “Sensei steps right through the portal,” says Turner, “and you can peek back into their world too.”
Next, Turner and Halfbrick set about creating a satisfying — and splashy — way for people to interact with their space. The main question: What’s the most logical way to launch fruit at people?
“We started with, OK, you have a couple meters square in front of you. What will the playspace look like? What if there’s a chair or a table in the way? How do we work around different scenarios for people in their office or living room or kitchen?” To find their answers, Halfbrick built RealityKit prototypes. “Just being able to see those really opened up the possibilities.” The answer? A set of cannons, arranged in a semicircle at the optimal distance for efficient slashing.
Instead of holding blades, you simply use your hands.
Samantha Turner, lead gameplay programmer at Halfbrick Studios
It also let them move onto the question of how players can carve up a bunch of airborne bananas in a 3D space. The team experimented with a variety of hand motions, but none felt as satisfying as the final result. “Instead of holding blades, you simply use your hands,” she says. “You become the weapon.”
And you’re a powerful weapon. Slice and dice pineapples and watermelons by jabbing with your hands. Send bombs away by pushing them to a far wall, where they harmlessly explode at a distance. Fire shuriken into floating fruit by brushing your palms in an outward direction — a motion Turner particularly likes. “It’s satisfying to see it up close, but when you see it happen far away, you get the full impact of fruit destruction,” she laughs. All were results of hand gesture explorations.
Truffles the pig awaits his reward in Super Fruit Ninja.
“We always knew hands would be the center of the experience,” she says. “We wanted players to be able to grab things and knock them away. And we can tailor the arc of the fruit to make sure it's a comfortable fruit-slicing experience — we’re actually using the vertical position of the device itself to make sure that we're not throwing fruit over your head or too low.”
The result is the most immersive — and possibly most entertaining — Fruit Ninja to date, not just for players but for the creators. “Honestly,” Turner says, “this version is one of my favorites.”
Realizing their vision: How djay designed for visionOS
Years ago, early in his professional DJ career, Algoriddim cofounder and CEO Karim Morsy found himself performing a set atop a castle tower on the Italian coast. Below him, a crowd danced in the ruins; before him stretched a moonlight-drenched coastline and the Mediterranean Sea. “It was a pretty inspiring environment,” Morsy says, probably wildly underselling this.
Through their app djay, Morsy and Algoriddim have worked to recreate that live DJ experience for nearly 20 years. The best-in-class DJ app started life as boxed software for Mac; subsequent versions for iPad offered features like virtual turntables and beat matching. The app was a smashing success that won an Apple Design Award in both 2011 and 2016.
On Apple Vision Pro, djay transports people to a number of inventive immersive environments.
But Morsy says all that previous work was prologue to djay on the infinite canvas. “When we heard about Apple Vision Pro,” he says, “it felt like djay was this beast that wanted to be unleashed. Our vision — no pun intended — with Algoriddim was to make DJing accessible to everyone,” he says. Apple Vision Pro, he says, represents the realization of that dream. “The first time I experienced the device was really emotional. I wanted to be a DJ since I was a child. And suddenly here were these turntables, and the night sky, and the stars above me, and this light show in the desert. I felt like, ‘This is the culmination of everything. This is the feeling I’ve been wanting people to experience.’”
When we heard about Apple Vision Pro, it felt like djay was this beast that wanted to be unleashed.
Karim Morsy, Algoriddim cofounder and CEO
Getting to that culmination necessitated what Morsy calls “the wildest sprint of our lives.” With a 360-degree canvas to explore, the team rethought the entire process of how people interacted with djay. “We realized that with a decade of building DJ interfaces, we were taking a lot for granted,” he says. “So the first chunk of designing for Apple Vision Pro was going back to the drawing board and saying, ‘OK, maybe this made sense 10 years ago with a computer and mouse, but why do we need it now? Why should people have to push a button to match tempos — shouldn’t that be seamless?’ There was so much we could abstract away.”
Spin in a fully immersive environment, or bring your two turntables into the room with you.
They also thought about environments. djay offers a windowed view, a shared space that brings 3D turntables into your environment, and several forms of full immersion. The app first opens to the windowed view, which should feel familiar to anyone who’s spun on the iPad app: a simple UI of two decks. The volumetric view brings into your room not just turntables, but the app’s key moment: the floating 3D cube that serves as djay’s effects control pad.
But those immersive scenes are where Morsy feels people can truly experience reacting to and feeding off the environment. There’s an LED wall that reflects colors from the artwork of the currently playing song, a nighttime desert scene framed by an arena of lights, and a space lounge — complete with dancing robots — that offers a great view of planet Earth. The goal of those environments is to help create the “flow state” that’s sought by live DJs. “You want to get into a loop where the environment influences you and vice versa,” Morsy says.
From left: Algoriddim’s Karim Morsy, Frederik Seiffert, and Federico Tessmann work on updates to their app with the proper equipment.
In the end, this incredible use of technology serves a very simple purpose: interacting with the music you love. Morsy — a musician himself — points to a piano he keeps in his office. “That piano has had the same interface for hundreds of years,” he says. “That’s what we’re trying to reach, that sweet spot between complexity and ease of use. With djay on Vision Pro, it’s less about, ‘Let’s give people bells and whistles,’ and more, ‘Let’s let them have this experience.’”
Hello Developer: January 2024
Welcome to Hello Developer. In this Apple Vision Pro-themed edition: Find out how to submit your visionOS apps to the App Store, learn how the team behind djay approached designing for the infinite canvas, and get technical answers straight from Apple Vision Pro engineers. Plus, catch up on the latest news, documentation, and developer activities.
FEATURED
Submit your apps to the App Store for Apple Vision Pro
Apple Vision Pro will have a brand-new App Store, where people can discover and download all the incredible apps available for visionOS. Whether you’ve created a new visionOS app or are making your existing iPad or iPhone app available on Apple Vision Pro, here’s everything you need to know to prepare and submit your app to the App Store.
BEHIND THE DESIGN
Realizing their vision: How djay designed for visionOS
Algoriddim CEO Karim Morsy says Apple Vision Pro represents “the culmination of everything” for his app, djay. In the latest edition of Behind the Design, find out how this incredible team approached designing for the infinite canvas.
Realizing their vision: How djay designed for visionOS View now
Q&A
Get answers from Apple Vision Pro engineers
In this Q&A, Apple Vision Pro engineers answer some of the most frequently asked questions from Apple Vision Pro developer labs all over the world.
Q&A: Building apps for visionOS View now
COLLECTION
Reimagine your enterprise apps on Apple Vision Pro
Discover the languages, tools, and frameworks you’ll need to build and test your apps for visionOS. Explore videos and resources that showcase productivity and collaboration, simulation and training, and guided work. And dive into workflows for creating or converting existing media, incorporating on-device and remote assets into your app, and much more.
Reimagine your enterprise apps on Apple Vision Pro View now
MEET WITH APPLE EXPERTS
Submit your request for developer labs and App Review consultations
Join us this month in the Apple Vision Pro developer labs to get your apps ready for visionOS. With help from Apple, you’ll be able to test, refine, and finalize your apps and games. Plus, Apple Developer Program members can check out one-on-one App Review, design, and technology consultations, offered in English, Spanish, Brazilian Portuguese, and more.
DOCUMENTATION
Check out visionOS sample apps, SwiftUI tutorials, audio performance updates, and more
These visionOS sample apps feature refreshed audio, visual, and timing elements, simplified collision boxes, and performance improvements.
-
Hello World: Use windows, volumes, and immersive spaces to teach people about the Earth.
-
Happy Beam: Leverage a Full Space to create a game using ARKit.
-
Diorama: Design scenes for your visionOS app using Reality Composer Pro.
-
Swift Splash: Use RealityKit to create an interactive ride in visionOS.
And these resources and updated tutorials cover iOS 17, accessibility, Live Activities, and audio performance.
-
SwiftUI Tutorials: Learn the latest best practices for iOS 17.
-
Accessibility Inspector: Review your app’s accessibility experience.
-
Starting and updating Live Activities with ActivityKit push notifications: Use push tokens to update and end Live Activities.
-
Analyzing audio performance with Instruments: Ensure a smooth and immersive audio experience using Audio System Trace.
View the full list of new resources.
Discover what’s new in the Human Interface Guidelines.
NEWS
Catch up on the latest updates
-
Announcing contingent pricing: Give customers discounted pricing when they’re subscribed to a different subscription on the App Store.
-
Updated agreements and guidelines now available: Check out the latest changes that have been made to support updated policies and provide clarification.
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Q&A: Building apps for visionOS
Over the past few months, Apple experts have fielded questions about visionOS in Apple Vision Pro developer labs all over the world. Here are answers to some of the most frequent questions they’ve been asked, including insights on new concepts like entities, immersive spaces, collision shapes, and much more.
How can I interact with an entity using gestures?
There are three important pieces to enabling gesture-based entity interaction:
- The entity must have an InputTargetComponent. Otherwise, it won’t receive gesture input at all.
- The entity must have a CollisionComponent. The shapes of the collision component define the regions that gestures can actually hit, so make sure the collision shapes are specified appropriately for interaction with your entity.
- The gesture that you’re using must be targeted to the entity you’re trying to interact with (or to any entity). For example:
private var tapGesture: some Gesture {
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { gestureValue in
            let tappedEntity = gestureValue.entity
            print(tappedEntity.name)
        }
}
It’s also a good idea to give an interactive entity a HoverEffectComponent, which enables the system to trigger a standard highlight effect when the user looks at the entity.
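Putting those pieces together, here’s a minimal sketch (the helper name and the sphere-shaped collision shape are placeholders for illustration, not part of the original answer) that configures an entity so it can receive gestures and show the system hover highlight:

import RealityKit

// A minimal sketch: give a hypothetical entity the components it needs
// to receive gestures and show the system hover effect.
func makeInteractive(_ entity: Entity) {
    // Without an InputTargetComponent, the entity receives no gesture input.
    entity.components.set(InputTargetComponent())
    // Collision shapes define the regions gestures can actually hit;
    // the sphere radius here is a placeholder for your entity's real bounds.
    entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
    // Lets the system highlight the entity when the user looks at it.
    entity.components.set(HoverEffectComponent())
}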
Should I use a window group, an immersive space, or both?
Consider the technical differences between windows, volumes, and immersive spaces when you decide which scene type to use for a particular feature in your app.
Here are some significant technical differences that you should factor into your decision:
- Windows and volumes from other apps the user has open are hidden when an immersive space is open.
- Windows and volumes clip content that exceeds their bounds.
- Users have full control over the placement of windows and volumes. Apps have full control over the placement of content in an immersive space.
- Volumes have a fixed size; windows are resizable.
- ARKit only delivers data to your app if it has an open immersive space.
Explore the Hello World sample code to familiarize yourself with the behaviors of each scene type in visionOS.
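To illustrate how these scene types coexist, here’s a minimal sketch (the view names and the space identifier are made up for the example) of an app that declares both a window group and an immersive space, and opens the space from a button:

import RealityKit
import SwiftUI

@main
struct ExampleApp: App {
    var body: some Scene {
        // A resizable window for the app's 2D interface.
        WindowGroup {
            ContentView()
        }

        // An immersive space the app can open for unbounded content.
        ImmersiveSpace(id: "Immersive") {
            ImmersiveContentView()
        }
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Immersive Space") {
            Task { _ = await openImmersiveSpace(id: "Immersive") }
        }
    }
}

struct ImmersiveContentView: View {
    var body: some View {
        RealityView { _ in
            // Add RealityKit entities to the immersive space here.
        }
    }
}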
How can I visualize collision shapes in my scene?
Use the Collision Shapes debug visualization in the Debug Visualizations menu, where you can find several other helpful debug visualizations as well. For information on debug visualizations, check out Diagnosing issues in the appearance of a running app.
Can I position SwiftUI views within an immersive space?
Yes! You can position SwiftUI views in an immersive space with the offset(x:y:) and offset(z:) methods. It’s important to remember that these offsets are specified in points, not meters. You can utilize PhysicalMetric to convert meters to points.
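For example, here’s a small sketch (the view name and the one-meter distance are arbitrary choices for illustration) that converts a physical distance to points with PhysicalMetric and applies it as a z offset:

import SwiftUI

struct FloatingLabel: View {
    // Converts a physical distance of one meter into points for this context.
    @PhysicalMetric(from: .meters) private var oneMeter = 1.0

    var body: some View {
        Text("Hello, visionOS")
            // Offsets are in points; a negative z offset pushes the view away from the viewer.
            .offset(z: -oneMeter)
    }
}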
What if I want to position my SwiftUI views relative to an entity in a reality view?
Use the RealityView attachments API to create a SwiftUI view and make it accessible as a ViewAttachmentEntity. This entity can be positioned, oriented, and scaled just like any other entity.
RealityView { content, attachments in
    // Fetch the attachment entity using the unique identifier.
    let attachmentEntity = attachments.entity(for: "uniqueID")!
    // Add the attachment entity as RealityView content.
    content.add(attachmentEntity)
} attachments: {
    // Declare a view that attaches to an entity.
    Attachment(id: "uniqueID") {
        Text("My Attachment")
    }
}
Can I position windows programmatically?
There’s no API available to position windows, but we’d love to know about your use case. Please file an enhancement request. For more information on this topic, check out Positioning and sizing windows.
Is there any way to know what the user is looking at?
As noted in Adopting best practices for privacy and user preferences, the system handles camera and sensor inputs without passing the information to apps directly. There's no way to get precise eye movements or exact line of sight. Instead, create interface elements that people can interact with and let the system manage the interaction. If you have a use case that you can't get to work this way, and as long as it doesn't require explicit eye tracking, please file an enhancement request.
When are the onHover and onContinuousHover actions called on visionOS?
The onHover and onContinuousHover actions are called when a finger is hovering over the view, or when the pointer from a connected trackpad is hovering over the view.
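As a small, hypothetical example of reacting to the hover phases that onContinuousHover reports:

import SwiftUI

struct HoverHighlightView: View {
    @State private var isHovering = false

    var body: some View {
        RoundedRectangle(cornerRadius: 12)
            .fill(isHovering ? Color.blue : Color.gray)
            .frame(width: 120, height: 120)
            .onContinuousHover { phase in
                switch phase {
                case .active(let location):
                    // Called repeatedly with the location in the view's coordinate space.
                    isHovering = true
                    print("Hovering at \(location)")
                case .ended:
                    isHovering = false
                }
            }
    }
}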
Can I show my own immersive environment textures in my app?
If your app has an ImmersiveSpace open, you can create a large sphere with an UnlitMaterial and scale it to have inward-facing geometry:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            do {
                // Create the sphere mesh.
                let mesh = MeshResource.generateSphere(radius: 10)
                // Create an UnlitMaterial.
                var material = UnlitMaterial(applyPostProcessToneMap: false)
                // Give the UnlitMaterial your equirectangular color texture.
                let textureResource = try await TextureResource(named: "example")
                material.color = .init(tint: .white, texture: .init(textureResource))
                // Create the model.
                let entity = ModelEntity(mesh: mesh, materials: [material])
                // Scale the model so that its mesh faces inward.
                entity.scale.x *= -1
                content.add(entity)
            } catch {
                // Handle the error.
            }
        }
    }
}
I have existing stereo videos. How can I convert them to MV-HEVC?
AVFoundation provides APIs to write videos in MV-HEVC format. For a full example, download the sample code project Converting side-by-side 3D video to multiview HEVC.
To convert your videos to MV-HEVC:
- Create an AVAsset for each of the left and right views.
- Use AVOutputSettingsAssistant to get output settings that work for MV-HEVC.
- Specify the horizontal disparity adjustment and field of view (this is asset specific). Here’s an example:
var compressionProperties = outputSettings[AVVideoCompressionPropertiesKey] as! [String: Any]
// Specifies the parallax plane.
compressionProperties[kVTCompressionPropertyKey_HorizontalDisparityAdjustment as String] = horizontalDisparityAdjustment
// Specifies the horizontal FOV (90 degrees is chosen in this case.)
compressionProperties[kCMFormatDescriptionExtension_HorizontalFieldOfView as String] = horizontalFOV
- Create an AVAssetWriterInputTaggedPixelBufferGroupAdaptor as the input for your AVAssetWriter.
- Create an AVAssetReader for each of the left and right video tracks.
- Read the left and right tracks, then append matching samples to the tagged pixel buffer group adaptor:
// Create a tagged buffer for each stereoView.
let taggedBuffers: [CMTaggedBuffer] = [
    .init(tags: [.videoLayerID(0), .stereoView(.leftEye)], pixelBuffer: leftSample.imageBuffer!),
    .init(tags: [.videoLayerID(1), .stereoView(.rightEye)], pixelBuffer: rightSample.imageBuffer!)
]
// Append the tagged buffers to the asset writer input adaptor.
let didAppend = adaptor.appendTaggedBuffers(taggedBuffers,
                                            withPresentationTime: leftSample.presentationTimeStamp)
How can I light my scene in RealityKit on visionOS?
You can light your scene in RealityKit on visionOS by:
- Using a system-provided automatic lighting environment that updates based on real-world surroundings.
- Providing your own image-based lighting via an ImageBasedLightComponent. To see an example, create a new visionOS app, select RealityKit as the Immersive Space Renderer, and select Full as the Immersive Space.
You can create materials with custom shading in Reality Composer Pro using the Shader Graph. A material created this way is accessible to your app as a ShaderGraphMaterial, so that you can dynamically change inputs to the shader in your code.
For a detailed introduction to the Shader Graph, watch Explore materials in Reality Composer Pro.
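As a rough sketch of that workflow (the material path, scene file name, bundle, and parameter name below are project-specific assumptions, not values from the original answer):

import RealityKit

// Loads a hypothetical Shader Graph material authored in Reality Composer Pro
// and updates one of its promoted inputs at runtime.
func applyCustomMaterial(to model: ModelEntity) async throws {
    var material = try await ShaderGraphMaterial(named: "/Root/GlowMaterial",
                                                 from: "Scene.usda",
                                                 in: Bundle.main)
    // "GlowStrength" must match an input exposed on the Shader Graph material.
    try material.setParameter(name: "GlowStrength", value: .float(0.8))
    model.model?.materials = [material]
}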
How can I position entities relative to the position of the device?
In an ImmersiveSpace, you can get the full transform of the device using the queryDeviceAnchor(atTimestamp:) method.
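Here’s a rough sketch of that approach, assuming an ARKitSession is already running a WorldTrackingProvider in your immersive space (the half-meter offset is an arbitrary example):

import ARKit
import QuartzCore
import RealityKit

// Places an entity half a meter in front of the device.
func placeInFrontOfDevice(_ entity: Entity, using worldTracking: WorldTrackingProvider) {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return
    }

    // The device's pose in the immersive space's coordinate system.
    var transform = deviceAnchor.originFromAnchorTransform
    // Move 0.5 meters along the device's forward (-Z) axis.
    transform.columns.3 += transform.columns.2 * -0.5
    entity.setTransformMatrix(transform, relativeTo: nil)
}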
Learn more about building apps for visionOS
Q&A: Spatial design for visionOS View now
Spotlight on: Developing for visionOS View now
Spotlight on: Developer tools for visionOS View now
Sample code contained herein is provided under the Apple Sample Code License.
Submit your apps to the App Store for Apple Vision Pro
Apple Vision Pro will have a brand-new App Store, where people can discover and download incredible apps for visionOS. Whether you’ve created a new visionOS app or are making your existing iPad or iPhone app available on Apple Vision Pro, here’s everything you need to know to prepare and submit your app to the App Store.
Updated Apple Developer Program License Agreement now available
The Apple Developer Program License Agreement has been revised to support updated policies and provide clarification. The revisions include:
-
Definitions, Section 3.3.3(N): Updated "Tap to Present ID" to "ID Verifier"
-
Definitions, Section 14.10: Updated terms regarding governing law and venue
-
Section 3.3: Reorganized and categorized provisions for clarity
-
Section 3.3.3(B): Clarified language on privacy and third-party SDKs
-
Section 6.7: Updated terms regarding analytics
-
Section 12: Clarified warranty disclaimer language
-
Attachment 1: Updated terms for use of Apple Push Notification Service and Local Notifications
-
Attachment 9: Updated terms for Xcode Cloud compute hours included with Apple Developer Program membership
Announcing contingent pricing for subscriptions
Contingent pricing for subscriptions on the App Store — a new feature that helps you attract and retain subscribers — lets you give customers a discounted subscription price as long as they’re actively subscribed to a different subscription. It can be used for subscriptions from one developer or two different developers. We’re currently piloting this feature and will be onboarding more developers in the coming months. If you’re interested in implementing contingent pricing in your app, you can start planning today and sign up to get notified when more details are available in January.
Get ready with the latest beta releases
The beta versions of iOS 17.3, iPadOS 17.3, macOS 14.3, tvOS 17.3, and watchOS 10.3 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.2 beta.
Hello Developer: December 2023
Welcome to Hello Developer. In this edition: Check out new videos on Game Center and the Journaling Suggestions API, get visionOS guidance straight from the spatial design team, meet three App Store Award winners, peek inside the time capsule that is Ancient Board Game Collection, and more.
VIDEOS
Manage Game Center with the App Store Connect API
In this new video, discover how you can use the App Store Connect API to automate your Game Center configurations outside of App Store Connect on the web.
Manage Game Center with the App Store Connect API Watch now
And find out how the new Journaling Suggestions API can help people reflect on the small moments and big events in their lives through your app — all while protecting their privacy.
Discover the Journaling Suggestions API Watch now
Q&A
Get your spatial design questions answered
What’s the best way to make a great first impression in visionOS? What’s a “key moment”? And what are some easy methods for making spatial computing visual design look polished? Get answers to these questions and more.
Q&A: Spatial design for visionOS View now
FEATURED
Celebrate the winners of the 2023 App Store Awards
Every year, the App Store celebrates exceptional apps that improve people’s lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact. Find out how the winning teams behind Finding Hannah, Photomator, and Unpacking approached their incredible work this year.
“We’re trying to drive change": Meet three App Store Award-winning teams View now
Missed the big announcement? Check out the full list of 2023 winners.
NEWS
Xcode Cloud now included with membership
Starting January 2024, all Apple Developer Program memberships will include 25 compute hours per month on Xcode Cloud as a standard, with no additional cost. Learn more.
BEHIND THE DESIGN
Travel back in time with Ancient Board Game Collection
Klemens Strasser’s Ancient Board Game Collection blends the new and the very, very old. Its games date back centuries: Hnefatafl is said to be nearly 1,700 years old, while the Italian game Latrunculi is closer to 2,000. “I found a book on ancient board games by an Oxford professor and it threw me right down a rabbit hole,” Strasser says. Find out how the Austria-based developer and a team of international artists gave these ancient games new life.
With Ancient Board Game Collection, Klemens Strasser goes back in time View now
DOCUMENTATION
Get creative with 3D immersion, games, SwiftUI, and more
This month’s new sample code, tutorials, and documentation cover everything from games to passing control between apps to addressing reasons for common crashes. Here are a few highlights:
-
Game Center matchmaking essentials, rules, and testing: Learn how to create custom matchmaking rules for better matches between players and test the rules before applying them.
-
Incorporating real-world surroundings in an immersive experience: This sample code project helps you use scene reconstruction in ARKit to give your app an idea of the shape of the person’s surroundings and to bring your app experience into their world.
-
Creating a macOS app: Find out how to bring your SwiftUI app to macOS, including adding new views tailored to macOS and modifying others to work better across platforms.
-
Creating a watchOS app: Find out how to bring your SwiftUI app to watchOS, including customizing SwiftUI views to display the detail and list views on watchOS.
View the full list of new resources.
View what’s new in the Human Interface Guidelines.
NEWS
Catch up on the latest updates
-
App Store holiday schedule: We’ll remain open throughout the holiday season and look forward to accepting your submissions. However, reviews may take a bit longer to complete from December 22 to 27.
-
Sandbox improvements: Now you can change a test account’s storefront, adjust subscription renewal rates, clear purchase history, simulate interrupted purchase flows directly on iPhone or iPad, and test Family Sharing.
-
New software releases: Build your apps using the latest developer tools and test them on this week’s OS releases. Download Xcode 15.1 RC, and the RC versions of iOS 17.2, iPadOS 17.2, macOS 14.2, tvOS 17.2, and watchOS 10.2.
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Q&A: Spatial design for visionOS
Spatial computing offers unique opportunities and challenges when designing apps and games. At WWDC23, the Apple design team hosted a wide-ranging Q&A to help developers explore designing for visionOS. Here are some highlights from that conversation, including insights on the spectrum of immersion, key moments, and sound design.
What’s the best way to make a great first impression on this platform?
While it depends on your app, of course, starting in a window is a great way to introduce people to your app and let them control the amount of immersion. We generally recommend not placing people into a fully immersive experience right away — it’s better to make sure they’re oriented in your app before transporting them somewhere else.
What should I consider when bringing an existing iPadOS or iOS app to visionOS?
Think about a key moment where your app would really shine spatially. For example, in the Photos app for visionOS, opening a panoramic photo makes the image wrap around your field of view. Ask yourself what that potential key moment — an experience that isn’t bound by a screen — is for your app.
From a more tactical perspective, consider how your UI will need to be optimized for visionOS. To learn more, check out “Design for spatial user interfaces”.
Design for spatial user interfaces Watch now
Can you say a bit more about what you mean by a “key moment”?
A key moment is a feature or interaction that takes advantage of the unique capabilities of visionOS. (Think of it as a spatial or immersive highlight in your app.) For instance, if you’re creating a writing app, your key moment might be a focus mode in which you immerse someone more fully in an environment or a Spatial Audio soundscape to get them in the creative zone. That’s just not possible on a screen-based device.
I often use a grid system when designing for iOS and macOS. Does that still apply here?
Definitely! The grid can be very useful for designing windows, and point sizes translate directly between platforms. Things can get more complex when you’re designing elements in 3D, like having nearby controls for a faraway element. To learn more, check out “Principles of spatial design.”
Principles of spatial design Watch now
What’s the best way to test Apple Vision Pro experiences without the device?
You can use the visionOS simulator in Xcode to recreate system gestures, like pinch, drag, tap, and zoom.
What’s the easiest way to make my spatial computing design look polished?
As a starting point, we recommend using the system-provided UI components. Think about hover shapes, how every element appears by default, and how they change when people look directly at them. When building custom components or larger elements like 3D objects, you'll also need to customize your hover effects.
What interaction or ergonomic design considerations should I keep in mind when designing for visionOS?
Comfort should guide experiences. We recommend keeping your main content in the field of view, so people don't need to move their neck and body too much. The more centered the content is in the field of view, the more comfortable it is for the eyes. It's also important to consider how you use input. Make sure you support system gestures in your app so people have the option to interact with content indirectly (using their eyes to focus an element and hand gestures, like a pinch, to select). For more on design considerations, check out “Design considerations for vision and motion.”
Design considerations for vision and motion Watch now
Are there design philosophies for fully immersive experiences? Should the content wrap behind the person’s head, above them, and below them?
Content can be placed anywhere, but we recommend providing only the amount of immersion needed. Apps can create great immersive experiences without taking over people's entire surroundings. To learn more, check out the Human Interface Guidelines.
Human Interface Guidelines: Immersive experiences
Are there guidelines for creating an environment for a fully immersive experience?
First, your environment should have a ground plane under the feet that aligns with the real world. As you design the specifics of your environment, focus on key details that will create immersion. For example, you don't need to render all the details of a real theater to convey the feeling of being in one. You can also use subtle motion to help bring an environment to life, like the gentle movement of clouds in the Mount Hood environment.
What else should I consider when designing for spatial computing?
Sound design comes to mind. When designing for other Apple platforms, you may not have placed as much emphasis on creating audio for your interfaces because people often mute sounds on their devices (or it's just not desirable for your current experience). With Apple Vision Pro, sound is crucial to creating a compelling experience.
People are adept at understanding their surroundings through sound, and you can use sound in your visionOS app or game to help people better understand and interact with elements around them. When someone presses a button, for example, an audio cue helps them recognize and confirm their actions. You can position sound spatially in visionOS so that audio comes directly from the item a person interacts with, and the system can use their surroundings to give it the appropriate reverberation and texture. You can even create spatial soundscapes for scenes to make them feel more lifelike and immersive.
For more on designing sound for visionOS, check out “Explore immersive sound design.”
Explore immersive sound design Watch now
Learn more
For even more on designing for visionOS, check out more videos, the Human Interface Guidelines, and the Apple Developer website.
Develop your first immersive app Watch now
Get started with building apps for spatial computing Watch now
Build great games for spatial computing Watch now
“We’re trying to drive change": Meet three App Store Award-winning teams
Every year, the App Store Awards celebrate exceptional apps that improve people’s lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact.
This year’s winners were drawn from a list of 40 finalists that included everything from flight trackers to retro games to workout planners to meditative puzzles. In addition to exhibiting an incredible variety of approaches, styles, and techniques, these winners shared a thoughtful grasp and mastery of Apple tools and technologies.
Meet the winners and finalists of the 2023 App Store Awards
For the team behind the hidden-object game Finding Hannah, their win for Cultural Impact is especially meaningful. “We’re trying to drive change on the design level by bringing more personal stories to a mainstream audience,” says Franziska Zeiner, cofounder and managing director of the Fein Games studio, from her Berlin office. “Finding Hannah is a story that crosses three generations, and each faces the question: How truly free are we as women?”
Finding Hannah’s story is driven by quiet, meaningful interactions between the main character, her mother, and her grandmother.
The Hannah of Finding Hannah is a 39-year-old Berlin resident trying to navigate a career, relationships (including with her best friend/ex, Emma), and the meaning of true happiness. Players complete a series of found-object puzzles that move along the backstory of Hannah’s mother and grandmother to add a more personal touch to the game.
We’re trying to drive change on the design level by bringing more personal stories to a mainstream audience.
Franziska Zeiner, Fein Games co-founder and managing director
To design the art for the game’s different time periods, the team tried a different approach. “We wanted an art style that was something you’d see more on social media than in games,” says Zeiner. “The idea was to try to reach people who weren’t gamers yet, and we thought we’d most likely be able to do that if we found a style that hadn’t been seen in games before. And I do think that added a new perspective, and maybe helped us stand out a little bit.”
Learn more about Finding Hannah
Download Finding Hannah from the App Store
Pixelmator, the team behind Mac App of the Year winner Photomator, is no stranger to awards consideration, having received multiple Apple Design Awards in addition to their 2023 App Store Award. The latter is especially meaningful for the Lithuania-based team. “We’re still a Mac-first company,” says Simonas Bastys, lead developer of the Pixelmator team. “For what we do, Mac adds so many benefits to the user experience.”
Photomator’s Smart Deband feature is just one of its many powerful features on Mac.
To start adding Photomator to their portfolio of Mac apps back in 2020, Bastys and his team of engineers decided against porting over their UIKit and AppKit code. Instead, they set out to build Photomator specifically for Mac with SwiftUI. “We had a lot of experience with AppKit,” Bastys says, “but we chose to transition to SwiftUI to align with cutting-edge, future-proof technologies.”
The team zeroed in on maximizing performance, assuming that people would need to navigate and manipulate large libraries. They also integrated a wealth of powerful editing tools, such as repairing, debanding, batch editing, and much more. Deciding what to work on — and what to prioritize — is a constant source of discussion. “We work on a lot of ideas in parallel,” Bastys says, “and what we prioritize comes up very naturally, based on what’s ready for shipment and what new technology might be coming.” This year, that meant a focus on HDR.
We had a lot of experience with AppKit, but we wanted to create with native Mac technologies.
Simonas Bastys, lead developer of the Pixelmator team
How does Bastys and the Pixelmator team keep growing after so long? “This is the most exciting field in computer science to me,” says Bastys. “There’s so much to learn. I’m only now starting to even understand the depth of human vision and computer image processing. It’s a continuous challenge. But I see endless possibilities to make Photomator better for creators.”
Download Photomator from the Mac App Store
To create the Cultural Impact winner Unpacking, the Australian duo of creative director Wren Brier and technical director Tim Dawson drew on more than a decade of development experience. Their game — part zen puzzle, part life story — follows a woman through the chapters of her life as she moves from childhood bedroom to first apartment and beyond. Players solve puzzles by placing objects around each new dwelling while learning more about her history with each new level — something Brier says is akin to a detective story.
“You have this series of places, and you’re opening these hints, and you’re piecing together who this person is,” she says from the pair’s home in Brisbane.
Brier and Dawson are partners who got the idea for Unpacking from — where else? — one of their own early moves. “There was something gamelike about the idea of finishing one box to unlock the one underneath,” Brier says. “You’re completing tasks, placing items together on shelves and in drawers. Tim and I started to brainstorm the game right away.”
Unpacking has no visible characters and no dialogue. Its emotionally rich story is told entirely through objects in boxes.
While the idea was technically interesting, says Dawson, the pair was especially drawn to the idea of unpacking as a storytelling vehicle. “This is a really weird example,” laughs Dawson, “but there’s a spatula in the game. That’s a pretty normal household item. But what does it look like? Is it cheap plastic, something that maybe this person got quickly? Is it damaged, like they’ve been holding onto it for a while? Is it one of those fancy brands with a rubberized handle? All of that starts painting a picture. It becomes this really intimate way of knowing a character.”
There was something game-like about the idea of finishing one box to unlock the one underneath.
Wren Brier, Unpacking creative director
Those kinds of discussions — spatula-based and otherwise — led to a game that includes novel uses of technology, like the haptic feedback you get when you shake a piggy bank or board game. But its diverse, inclusive story is the reason behind its App Store Award nod for Cultural Impact. Brier and Dawson say players of all ages and backgrounds have shared their love of the game, drawn by the universal experience of moving yourself, your belongings, and your life into a new home. “One guy even sent us a picture of his bouldering shoes and told us they were identical to the ones in the game,” laughs Brier. “He said, ‘I have never felt so seen.’”
With Ancient Board Game Collection, Klemens Strasser goes back in time
Klemens Strasser will be the first to tell you that prior to launching his Ancient Board Game Collection, he wasn’t especially skilled at Hnefatafl. “Everybody knows chess and everybody knows backgammon,” says the indie developer from his home office in Austria, “but, yeah, I didn’t really know that one.”
Today, Strasser runs what may well be the hottest Hnefatafl game in town. An Apple Design Award finalist for Inclusivity, Ancient Board Game Collection comprises nine games that reach back not years or decades but centuries — Hnefatafl (or Viking chess) is said to be nearly 1,700 years old, while the Italian game Latrunculi is closer to 2,000. And while games like Konane, Gomoku, and Five Field Kono might not be household names, Strasser’s collection gives them fresh life through splashy visuals, a Renaissance faire soundtrack, efficient onboarding, and even a bit of history.
At roughly 1,700 years old, Hnefatafl is one of the more ancient titles in Klemens Strasser’s Ancient Board Game Collection.
Strasser is a veteran of Flexibits (Fantastical, Cardhop) and the developer behind such titles as Letter Rooms, Subwords and Elementary Minute (for which he won a student Apple Design Award in 2015). But while he was familiar with Nine Men’s Morris — a game popular in Austria he’d play with his grandma — he wasn’t exactly well versed in third-century Viking pastimes until a colleague brought Hnefatafl to his attention three years ago. “It was so different than the traditional symmetric board games I knew,” he says. “I really fell in love with it.”
Less appealing were mobile versions of Hnefatafl, which Strasser found lacking. “The digital versions of many board games have a certain design,” he says. “It’s usually pretty skeuomorphic, with a lot of wood and felt and stuff like that. That just didn’t make me happy. And I thought, ‘Well, if I can’t find one I like, I’ll build it.’”
I found a book on ancient board games by an Oxford professor and it threw me right down a rabbit hole.
Klemens Strasser
Using SpriteKit, Strasser began mocking up an iOS Hnefatafl prototype in his downtime. A programmer by trade — “I’m not very good at drawing stuff,” he demurs — Strasser took pains to keep his side project as simple as possible. “I always start with minimalistic designs for my games and apps, but these are games you play with some stones and maybe a piece of paper,” he laughs. “I figured I could build that myself.”
His Hnefatafl explorations came surprisingly fast — enough so that he started wondering what other long-lost games might be out there. “I found a book on ancient board games by an Oxford professor and it threw me right down a rabbit hole,” Strasser laughs. “I kept saying, ‘Oh, that’s an interesting game, and that’s also an interesting game, and that’s another interesting game.’” Before he knew it, his simple Hnefatafl mockup had become a buffet of games. “And I still have a list of like 20 games I’d still like to digitize,” he says.
Italian designer Carmine Acierno brought a mosaic-inspired design to Nine Men’s Morris.
For the initial designs of his first few games, Strasser tried to maintain the simple style of his Hnefatafl prototype. “But I realized that I couldn’t really represent the culture and history behind each game in that way,” he says, “so I hired people who live where the games are from.”
That’s where Ancient Board Game Collection really took off. Strasser began reaching out to artists from each ancient game’s home region — and the responses came fast. Out went the minimalist version of Ancient Board Game Collection, in came a richer take, powered by a variety of cultures and design styles. For Hnefatafl, Strasser made a fortuitous connection with Swedish designer Albina Lind. “I sent her a few images of like Vikings and runestones, and in two hours she came up with a design that was better than anything I could have imagined,” he says. “If I hadn’t run into her, I might not have finished the project. But it was so perfect that I had to continue.”
Stockholm-based artist Albina Lind leapt right into designing Hnefatafl. “I instantly thought, ‘Well, this is my cup of tea,’” she says.
Lind was a wise choice. The Stockholm-based freelance artist had nearly a decade of experience designing games, including her own Norse-themed adventure, Dragonberg. “I instantly thought, ‘Well, this is my cup of tea,’” Lind says. Her first concept was relatively realistic, all dark wood and stone textures, before she settled on a more relaxed, animation-inspired style. “Sometimes going unreal, going cartoony, is even more work than being realistic,” she says with a laugh. Lind went on to design two additional ancient games: Dablot, the exact origins of which aren’t known but which first turned up in 1892, and Halatafl, a 14th-century game of Scandinavian origin.
Work arrived from around the globe. Italian designer Carmine Acierno contributed a mosaic-inspired version of Nine Men’s Morris; Honolulu-based designer Anna Fujishige brought a traditional Hawaiian flavor to Konane. And while the approach succeeded in preserving more of each game’s authentic heritage, it did mean iterating with numerous people over numerous emails. One example: Tokyo-based designer Yosuke Ando pitched changing Strasser’s initial designs for the Japanese game Gomoku altogether. “Klemens approached me initially with the idea of the game design to be inspired by ukiyo-e (paintings) and musha-e (woodblock prints of warriors),” Ando says. “Eventually, we decided to focus on samurai warrior armor from musha-e, deconstructing it, and simplifying these elements into the game UI.”
Honolulu-based designer Anna Fujishige brought a traditional Hawaiian flavor to Konane (at left), while the Tokyo designer Yosuke Ando’s ideas for Gomoku were inspired by samurai warrior armor.
While the design process continued, Strasser worked on an onboarding strategy — times nine. As you might suspect, it can be tricky to explain the rules and subtleties of 500-year-old games from lost civilizations, and Strasser’s initial approach — walkthroughs and puzzles designed to teach each game step by step — quickly proved unwieldy. So he went in the other direction, concentrating on writing “very simple, very understandable” rules with short gameplay animations that can be accessed at any time. “I picked games that could be explained in like three or four sentences,” he says. “And I wanted to make sure it was all accessible via VoiceOver.”
Strasser designed every part of Ancient Board Game Collection with accessibility in mind.
In fact, accessibility remained a priority throughout the entire project. (He wrote his master’s thesis on accessibility in Unity games.) As an Apple Design Award finalist for Inclusivity, Ancient Board Game Collection shines with best-in-class VoiceOver adoption, as well as support for Reduce Motion, Dynamic Type, and high-contrast game boards. “It’s at least some contribution to making everything better for everyone,” he says.
I picked games that could be explained in like three or four sentences. And I wanted to make sure it was all accessible via VoiceOver.
Klemens Strasser
Ancient Board Game Collection truly is for everyone, and it’s hardly hyperbole to call it a novel way to introduce games like Hnefatafl to a whole new generation of players. “Most people,” he says, “are just surprised that they’ve never heard of these games.”
Learn more about Ancient Board Game Collection
Download Ancient Board Game Collection from the App Store
Behind the Design is a series that explores design practices and philosophies from each of the winners and finalists of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
25 hours of Xcode Cloud now included with the Apple Developer Program
Xcode Cloud, the continuous integration and delivery service built into Xcode, accelerates the development and delivery of high-quality apps. It brings together cloud-based tools that help you build apps, run automated tests in parallel, deliver apps to testers, and view and manage user feedback.
We’re pleased to announce that as of January 2024, all Apple Developer Program memberships will include 25 compute hours per month on Xcode Cloud as a standard, with no additional cost. If you’re already subscribed to Xcode Cloud for free, no additional action is required on your part. And if you haven’t tried Xcode Cloud yet, now is the perfect time to start building your app for free in just a few minutes.
Privacy updates for App Store submissions
Third-party SDK privacy manifest and signatures. Third-party software development kits (SDKs) can provide great functionality for apps; they can also have the potential to impact user privacy in ways that aren’t obvious to developers and users. As a reminder, when you use a third-party SDK with your app, you are responsible for all the code the SDK includes in your app, and need to be aware of its data collection and use practices.
At WWDC23, we introduced new privacy manifests and signatures for SDKs to help app developers better understand how third-party SDKs use data, secure software dependencies, and provide additional privacy protection for users. Starting in spring 2024, if your new app or app update submission adds a third-party SDK that is commonly used in apps on the App Store, you’ll need to include the privacy manifest for the SDK. Signatures are also required when the SDK is used as a binary dependency. This functionality is a step forward for all apps, and we encourage all SDKs to adopt it to better support the apps that depend on them.
Learn more and view list of commonly-used third-party SDKs
New use cases for APIs that require reasons. When you upload a new app or app update to App Store Connect that uses an API (including from third-party SDKs) that requires a reason, you’ll receive a notice if you haven’t provided an approved reason in your app’s privacy manifest. Based on the feedback we received from developers, the list of approved reasons has been expanded to include additional use cases. If you have a use case that directly benefits users that isn’t covered by an existing approved reason, submit a request for a new reason to be added.
Starting in spring 2024, in order to upload your new app or app update to App Store Connect, you’ll be required to include an approved reason in the app’s privacy manifest which accurately reflects how your app uses the API.
New design and technology consultations now available
Have questions on designing your app or implementing a technology? We’re here to help you find answers, no matter where you are in your development journey. One-on-one consultations with Apple experts in December — and newly published dates in January — are available now.
We’ll have lots more consultations and other activities in store for 2024 — online, in person, and in multiple languages.
Get your apps ready for the holidays
The busiest season on the App Store is almost here! Make sure your apps and games are up to date and ready in advance of the upcoming holidays. We’ll remain open throughout the season and look forward to accepting your submissions. On average, 90% of submissions are reviewed in less than 24 hours. However, reviews may take a bit longer to complete from December 22 to 27.
App Store Award winners announced
Join us in celebrating the work of these outstanding developers from around the world!
App Store Award finalists announced
Every year, the App Store celebrates exceptional apps that improve people’s lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact. This year we’re proud to recognize nearly 40 outstanding finalists. Winners will be announced in the coming weeks.
PTC is uniting the makers
APPLE VISION PRO APPS FOR ENTERPRISE
PTC’s CAD products have been at the forefront of the engineering industry for more than three decades. And the company’s AR/VR CTO, Stephen Prideaux-Ghee, has too. “I’ve been doing VR for 30 years, and I’ve never had this kind of experience before,” he says. “I almost get so blasé about VR. But when I had [Apple Vision Pro] on, walking around digital objects and interacting with others in real time — it’s one of those things that makes you stop in your tracks."
Prideaux-Ghee says Apple Vision Pro offers PTC an opportunity to bring together components of the engineering and manufacturing process like never before. “Our customers either make stuff, or they make the machines that help somebody else make stuff,” says Prideaux-Ghee. And that stuff can be anything from chairs to boats to spaceships. “I can almost guarantee that the chair you’re sitting on is made by one of our customers,” he says.
As AR/VR CTO (which he says means “a fancy title for somebody who comes up with crazy ideas and has a reasonably good chance of implementing them”), Prideaux-Ghee describes PTC’s role as the connective tissue between the multiple threads of production. “When you’ve got a big, international production process, it's not always easy for the people involved to talk to each other. Our thought was: ‘Hey, we’re in the middle of this, so let’s come up with a simple mechanism that allows everyone to do so.’”
I’ve been doing VR for 30 years, and I’ve never had this kind of experience before.
Stephen Prideaux-Ghee, AR/VR CTO of PTC
For PTC, it’s all about communication and collaboration. “You can be a single user and get a lot of value from our app,” says Prideaux-Ghee, “but it really starts when you have multiple people collaborating, either in the same room or over FaceTime and SharePlay.” He speaks from experience; PTC has tested its app with everyone in the same space, and spread out across different countries.
"It enables some really interesting use cases, especially with passthrough," says Prideaux-Ghee. "You can use natural human interactions with a remote device."
Development is going fast. In recent weeks, PTC completed a prototype in which changes made on their iPad CAD software immediately reflect in Apple Vision Pro. “Before, we weren’t able to drive from the CAD software,” he explains. “Now, one person can run our CAD software pretty much unmodified and another can see changes instantly in 3D, at full scale. It’s really quite magical.”
Read more
Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro — and realizing ideas that were never before possible.
JigSpace is in the driver’s seat View now
Optimize your game for Apple platforms
In this series of videos, you can learn how to level up your pro app or game by harnessing the speed and power of Apple platforms. We’ll discover GPU advancements, explore new Metal profiling tools for M3 and A17 Pro, and share performance best practices for Metal shaders.
Explore GPU advancements in M3 and A17 Pro Watch now
Discover new Metal profiling tools for M3 and A17 Pro Watch now
Learn performance best practices for Metal shaders Watch now
New to developing games for Apple platforms? Familiarize yourself with the tools and technologies you need to get started.
JigSpace is in the driver’s seat
APPLE VISION PRO APPS FOR ENTERPRISE
It’s one of the most memorable images from JigSpace’s early Apple Vision Pro explorations: A life-size Alfa Romeo C43 Formula 1 car, dark cherry red, built to scale, reflecting light from all around, and parked right in the room. The camera pans back over the car’s front wings; a graceful animation shows airflow over the wings and body.
Numa Bertron, cofounder and chief technology officer for JigSpace — the creative and collaborative company that partnered with Alfa Romeo for the model — has been in the driver’s seat for the project from day one and still wasn’t quite prepared to see the car in the spatial environment. “The first thing everyone wanted to do was get in,” he says. “Everyone was stepping over the side to get in, even though you can just, you know, walk through.”
The F1 car is just one component of JigSpace’s grand plans for visionOS. The company is leaning on the new platform to create avenues of creativity and collaboration never before possible.
Bertron brings up one of JigSpace’s most notable “Jigs” (the company term for spatial presentations): an incredibly detailed model of a jet engine. “On iPhone, it’s an AR model that expands and looks awesome, but it’s still on a screen,” he explains. On Apple Vision Pro, that engine becomes a life-size piece of roaring, spinning machinery — one that people can walk around, poke through, and explore in previously unimaginable detail.
“One of our guys is a senior 3D artist,” says Bertron, “and the first time he saw one of his models in space at scale — and walked around it with his hands free — he actually cried.”
We made that F1 Jig with tools everyone can use.
Numa Bertron, JigSpace cofounder and chief technology officer
Getting there required some background learning. Prior to developing for visionOS, Bertron had no experience with SwiftUI. “We’d never gone into Xcode, so we started learning SwiftUI and RealityKit. Honestly, we expected some pain. But since everything is preset, we had really nice rounded corners, blur effects, and smooth scrolling right off the bat.”
JigSpace is designing a “full-on collaboration platform,” says Bertron.
For people who’ve used JigSpace on iOS, the visionOS version will look familiar but feel quite different. “We asked ourselves: What's the appropriate size for an object in front of you?” asks Bertron. “What’s comfortable? Will that model be on the table or on the floor? Spatial computing introduces so many more opportunities — and more decisions.”
In the case of the F1 example, it also offers a chance to level up visually. “For objects that big, we’d never been able to achieve this level of fidelity on smaller devices, so we always had to compromise,” says Bertron. In visionOS, they were free to keep adding. “We’d look at a prototype and say, ‘Well, this still runs, so let’s double the size of the textures and add more screws and more effects!’” (It’s not just about functionality, but fun as well. You can remove a piece of the car — like a full-sized tire — and throw it backwards over your head.)
The incredible visual achievement is matched by new powers of collaboration. “If I point at the tire, the other person sees me, no matter where they are,” says Bertron. “I can grab the wheel and give it to them. I can circle something we need to fix, I can leave notes or record audio. It’s a full-on collaboration platform.” And it’s also for everyone, not just F1 drivers and aerospace engineers. “We made that F1 Jig with tools everyone can use.”
Download JigSpace from the App Store
Read more: Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro — and realizing ideas that were never before possible.
PTC is uniting the makers
The “sweet, creative” world of Kimono Cats
Games simply don’t get much cuter than Kimono Cats, a casual cartoon adventure about two cats on a date (awww) that creator Greg Johnson made as a present for his wife. “I wanted to make a game she and I could play together,” says the Maui-based indie developer, “and I wanted it to be sweet, creative, and romantic.”
Kimono Cats is all three, and it’s also spectacularly easy to play and navigate. This Apple Design Award finalist for Interaction in games is set in a Japanese festival full of charming mini-games — darts, fishing, and the like — that are designed for maximum simplicity and casual fun. Players swipe up to throw darts at balloons that contain activities, rewards, and sometimes setbacks that threaten to briefly derail the date. Interaction gestures (like scooping fish) are simple and rewarding, and the gameplay variation and side activities (like building a village for your feline duo) fit right in.
“I wanted something sweet, creative, and romantic,” says Kimono Cats developer Greg Johnson.
“I’m a huge fan of Hayao Miyazaki and that kind of heartfelt, slower-paced style,” says Johnson. “What you see in Kimono Cats is a warmth and appreciation for Japanese culture.”
You also see a game that’s a product of its environment. Johnson’s been creating games since 1983 and is responsible for titles like Starflight, ToeJam & Earl, Doki-Doki Universe, and many more. His wife, Sirena, is a builder of model houses — miniature worlds not unlike the village in Kimono Cats. And the game’s concept was a reaction to the early days of COVID-19 lockdowns. “When we started building this in 2020, everybody was under so much weight and pressure,” he says. “We felt like this was a good antidote.”
Early Kimono Cats sketches show how the characters’ cute look was established early in the design process.
To start creating the game, Johnson turned to artist and longtime collaborator Ferry Halim, as well as Tanta Vorawatanakul and Ferrari Duanghathai, a pair of developers who happen to be married. “Tanta and Ferrari would provide these charming little characters, and Ferry would come in to add animations — like moving their eyes,” says Johnson. “We iterated a lot on animating the bubbles — how fast they were moving, how many there were, how they were obscured. That was the product of a lot of testing and listening all throughout the development process.”
When we started with this in 2020, everybody was under so much weight and pressure. We felt like this was a good antidote.
Greg Johnson, Kimono Cats
Johnson notes that players can select characters without gender distinction — a detail that he and the Kimono Cats team prioritized from day one. “Whenever any companion kisses the player character on the cheek, a subtle rainbow will appear in the sky over their heads,” Johnson says. “This allows the gender of the cat characters to be open to interpretation by the users.”
Kimono Cats was designed with the simple goal of bringing smiles. “The core concept of throwing darts at bubbles isn't an earth-shaking idea by any stretch,” says Johnson, “but it was a way to interact with the storytelling that I hadn’t seen before, and the festival setting felt like a natural match.”
Find Kimono Cats on Apple Arcade
Behind the Design is a series that explores design practices and philosophies from each of the winners and finalists of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Spotlight on: Apple Vision Pro apps for enterprise
Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro — and realizing ideas that were never before possible. We caught up with two of those companies — JigSpace and PTC — to find out how they’re approaching the new world of visionOS.
JigSpace is in the driver’s seat
PTC is uniting the makers
Reimagine your enterprise apps on Apple Vision Pro
Discover the languages, tools, and frameworks you’ll need to build and test your apps in visionOS. Explore videos and resources that showcase productivity and collaboration, simulation and training, and guided work. And dive into workflows for creating or converting existing media, incorporating on-device and remote assets into your app, and much more.
Apple Vision Pro at work
- Keynote
- Keynote (ASL)
- Platforms State of the Union
- Platforms State of the Union (ASL)
Design for Apple Vision Pro
WWDC sessions
- Design for spatial input
- Design for spatial user interfaces
- Principles of spatial design
- Design considerations for vision and motion
- Explore immersive sound design
Sample code, articles, documentation, and resources
Developer paths to Apple Vision Pro
WWDC sessions
- Go beyond the window with SwiftUI
- Meet SwiftUI for spatial computing
- Meet ARKit for spatial computing
- What’s new in SwiftUI
- Discover Observation in SwiftUI
- Enhance your spatial computing app with RealityKit
- Build spatial experiences with RealityKit
- Evolve your ARKit app for spatial experiences
- Create immersive Unity apps
- Bring your Unity VR app to a fully immersive space
- Meet Safari for spatial computing
- Rediscover Safari developer features
- Design for spatial input
- Explore the USD ecosystem
- Explore USD tools and rendering
Sample code, articles, documentation, and resources
Unity – XR Interaction Toolkit package
Unity – How Unity builds applications for Apple platforms
three.js – webGL and WebXR library
babylon.js – webGL and WebXR library
PlayCanvas – webGL and WebXR library
Immersiveweb – WebXR Device API
WebKit.org – Bug tracking for WebKit open source project
Frameworks to explore
WWDC sessions
- Discover streamlined location updates
- Meet Core Location Monitor
- Meet MapKit for SwiftUI
- What's new in MapKit
- Build spatial SharePlay experiences
- Share files with SharePlay
- Design spatial SharePlay experiences
- Discover Quick Look for spatial computing
- Create 3D models for Quick Look spatial experiences
- Explore pie charts and interactivity in Swift Charts
- Elevate your windowed app for spatial computing
- Create a great spatial playback experience
- Deliver video content for spatial experiences
Sample code, articles, documentation, and resources
Placing content on detected planes
Incorporating real-world surroundings in an immersive experience
Tracking specific points in world space
Tracking preregistered images in 3D space
Explore a location with a highly detailed map and Look Around
Drawing content in a group session
Supporting Coordinated Media Playback
Adopting live updates in Core Location
Monitoring location changes with Core Location
Access enterprise data and assets
WWDC sessions
- Meet Swift OpenAPI Generator
- Advances in Networking, Part 1
- Advances in App Background Execution
- The Push Notifications primer
- Power down: Improve battery consumption
- Build robust and resumable file transfers
- Efficiency awaits: Background tasks in SwiftUI
- Use async/await with URLSession
- Meet SwiftData
- Explore the USD ecosystem
- What’s new in App Store server APIs
Sample code, articles, documentation, and resources
Announcing the Swift Student Challenge 2024
Apple is proud to support and uplift the next generation of student developers, creators, and entrepreneurs. The Swift Student Challenge has given thousands of students the opportunity to showcase their creativity and coding capabilities through app playgrounds, and build real-world skills that they can take into their careers and beyond. From connecting their peers to mental health resources to identifying ways to support sustainability efforts on campus, Swift Student Challenge participants use their creativity to develop apps that solve problems they’re passionate about.
We’re releasing new coding resources, working with community partners, and announcing the Challenge earlier than in previous years so students can dive deep into Swift and the development process — and educators can get a head start in supporting them.
Applications will open in February 2024 for three weeks.
New for 2024, out of 350 overall winners, we’ll recognize 50 Distinguished Winners for their outstanding submissions and invite them to join us at Apple in Cupertino for three incredible days next summer.
Over 30 new developer activities now available
Ready to level up your app or game? Join us around the world for a new set of developer labs, consultations, sessions, and workshops, hosted in person and online throughout November and December.
You can explore:
- App Store activities: Learn about discovery, engagement, in-app events, custom product pages, subscription best practices, and much more.
- Apple Vision Pro developer labs: Apply to attend a lab in Cupertino, London, Munich, New York City, Shanghai, Singapore, Sydney, or Tokyo.
- Apple Vision Pro activities: Learn to design and build an entirely new universe of apps and games for visionOS.
- Design and technology consultations: Sign up for one-on-one guidance on app design, technology implementation, and more.
Discover activities in multiple time zones and languages.
Tax updates for apps, in-app purchases, and subscriptions
The App Store’s commerce and payments system was built to enable you to conveniently set up and sell your products and services on a global scale in 44 currencies across 175 storefronts. Apple administers tax on behalf of developers in over 70 countries and regions and provides you with the ability to assign tax categories to your apps and in‑app purchases.
Periodically, we make updates to rates, categories, and agreements to accommodate new regulations and rate changes in certain regions. As of today, the following updates have been made in App Store Connect.
Tax rates
Your proceeds from the sale of eligible apps and in‑app purchases (including auto‑renewable subscriptions) have been increased to reflect the following reduced value-added tax (VAT) rates. Prices on the App Store haven’t changed.
- Austria: Reduced VAT rates for certain apps in the Video tax category
- Cyprus: Reduced VAT rate of 3% for certain apps in the following tax categories: Books, News Publications, Audiobooks, Magazines and other periodicals
- Vietnam: Eliminated VAT for certain apps in the following tax categories: Books, News Publications, Magazines and other periodicals
Tax categories
- New Boosting category: Apps and/or in-app purchases that offer resources to provide exposure, visibility, or engagement to enhance the prominence and reach of specific content that’s experienced or consumed in app (such as videos, sales of “boosts” in social media apps, listings, and/or other forms of user-generated content).
- New attribute for books: Textbook or other educational publication used for teaching and studying between ages 5 to 18
- New attributes for videos: Exclusively features live TV broadcasting and/or linear programming. Public TV broadcasting, excluding shopping or infomercial channels.
If any of these categories or attributes are relevant to your apps or in-app purchases, you can review and update your selections in the Pricing and Availability section of My Apps.
Learn about setting tax categories
Paid Applications Agreement
- Exhibit C Section 1.2.2: Updated language to clarify the goods and services tax (GST) requirements for developers on the Australia storefront.
Get ready with the latest beta releases
The beta versions of iOS 17.2, iPadOS 17.2, macOS 14.2, tvOS 17.2, and watchOS 10.2 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.1 beta.
To check if a known issue from a previous beta release has been resolved or if there’s a workaround, review the latest release notes. Please let us know if you encounter an issue or have other comments. We value your feedback, as it helps us address issues, refine features, and update documentation.
TestFlight makes it even simpler to manage testers
TestFlight provides an easy way to get feedback on beta versions of your apps, so you can publish on the App Store with confidence. Now, improved controls in App Store Connect let you better evaluate tester engagement and manage participation to help you get the most out of beta testing. Sort testers by status and engagement metrics (like sessions, crashes, and feedback), and remove inactive testers who haven’t engaged. You can also filter by device and OS, and even select relevant testers to add to a new group.
Scary fast.
Watch the October 30 event at apple.com.
New delivery metrics now available in the Push Notifications Console
The Push Notifications Console now includes metrics for notifications sent in production through the Apple Push Notification service (APNs). With the console’s intuitive interface, you’ll get an aggregated view of delivery statuses and insights into various statistics for notifications, including a detailed breakdown based on push type and priority.
Introduced at WWDC23, the Push Notifications Console makes it easy to send test notifications to Apple devices through APNs.
Apple Vision Pro developer labs expand to New York City and Sydney
We’re thrilled with the excitement and enthusiasm from developers around the world at the Apple Vision Pro developer labs, and we’re pleased to announce new labs in New York City and Sydney. Join us to test directly on the device and connect with Apple experts for help with taking your visionOS, iPadOS, and iOS apps even further on this exciting new platform. Labs also take place in Cupertino, London, Munich, Shanghai, Singapore, and Tokyo.
Learn about other ways to work with Apple to prepare for visionOS.
“Small but mighty”: How Plex serves its global community
The team behind Plex has a brilliant strategy for dealing with bugs and addressing potential issues: Find them first.
“We’ve got a pretty good process in place,” says Steve Barnegren, Plex senior software engineer on Apple platforms, “and when that’s the case, things don’t go wrong.”
Launched in 2009, Plex is designed to serve as a “global community for streaming content,” says engineering manager Alex Stevenson-Price, who’s been with Plex for more than seven years. A combination streaming service and media server, Plex aims to cover the full range of the streaming experience — everything from discovery to content management to organizing watchlists.
This allows us more time to investigate the right solutions.
Ami Bakhai, Plex product manager for platforms and partners
To make it all run smoothly, the Plex team operates on a six-week sprint, offering regular opportunities to think in blocks, define stop points in their workflow, and assess what’s next. “I’ve noticed that it provides more momentum when it comes to finalizing features or moving something forward,” says Ami Bakhai, product manager for platforms and partners. “Every team has their own commitments. This allows us more time to investigate the right solutions.”
The Plex team iterates, distributes, and releases quickly — so testing features and catching issues can be a tall order. (Plex releases regular updates during their sprints for its tvOS flagship, iOS, iPadOS, and macOS apps.)
Though Plex boasts a massive reach across all the platforms, it’s not powered by a massive number of people. The fully remote team relies on a well-honed mix of developer tools (like Xcode Cloud and TestFlight), clever internal organization, Slack integration, and a thriving community of loyal beta testers that stretches back more than a decade. “We’re relatively small,” says Danni Hemberger, Plex director of product marketing, “but we’re mighty.”
Over the summer, the Plex team made a major change to their QA process: Rather than bringing in their QA teams right before the release, they shifted QA to a continuous process that unfolds over every pull request. “The QA team would find something right at the end, which is when they’d start trying to break everything,” laughs Barnegren. “Now we can say, ‘OK, ten features have gone in, and all of them have had QA eyes on them, so we’re ready to press the button.’”
Now we can say, ‘OK, ten features have gone in, and all of them have had QA eyes on them, so we’re ready to press the button.'
Steve Barnegren, Plex senior software engineer on Apple platforms
The continuous QA process is a convenient mirror to the continuous delivery process. Previously, Plex tested before a new build was released to the public. Now, through Xcode Cloud, Plex sends nightly builds to all their employees, ensuring that everyone has access to the latest version of the app.
Once the release has been hammered out internally, it moves on to Plex’s beta testing community, which might be more accurately described as a beta testing city. It numbers about 8,000 people, some of whom date back to Plex’s earliest days. “That constant feedback loop is super valuable, especially when you have power users that understand your core product,” says Stevenson-Price.
All this feedback and communication is powered by TestFlight and Plex’s customer forums. “This is especially key because we have users supplying personal media for parts of the application, and that can be in all kinds of rare or esoteric formats,” says Barnegren.
[CI] is a safety net. Whenever you push code, your app is being tested and built in a consistent way. That’s so valuable, especially for a multi-platform app like ours.
Alex Stevenson-Price, Plex engineering manager
To top it all off, this entire process is automated with every new feature and every new bug fix. Without any extra work or manual delivery, the Plex team can jump right on the latest version — an especially handy feature for a company that’s dispersed all over the globe. “It’s a great reminder of ‘Hey, this is what’s going out,’ and allows my marketing team to stay in the loop,” says Hemberger.
It’s also a great use of a continuous integration system (CI). “I’m biased from my time spent as an indie dev, but I think all indie devs should try a CI like Xcode Cloud,” says Stevenson-Price. “I think some indies don’t always see the benefit on paper, and they’ll say, ‘Well, I build the app myself, so why do I need a CI to build it for me?’ But it’s a safety net. Whenever you push code, your app is being tested and built in a consistent way. That’s so valuable, especially for a multi-platform app like ours. And there are so many tools at your disposal. Once you get used to that, you can’t go back.”
The gorgeous gadgets of Automatoys
Steffan Glynn’s Automatoys is a mix between a Rube Goldberg machine and a boardwalk arcade game — and there’s a very good reason why.
In 2018, the Cardiff-based developer visited the Musée Mécanique, a vintage San Francisco arcade packed with old-timey games, pinball machines, fortune tellers, and assorted gizmos. On that same trip, he stopped by an exhibit of Rube Goldberg sketches that showcased page after page of wildly intricate machines. “It was all about the delight of the pointless and captivating,” Glynn says. “There was a lot of crazy inspiration on that trip.”
An early sketch of the ramps, mazes, and machines that combine to create the puzzles in Automatoys.
That inspiration turned into Automatoys, an Apple Design Award finalist for Interaction in games. Automatoys is a single-touch puzzler in which players roll their marble from point A to point B by navigating a maze of ramps, elevators, catapults, switches, and more. True to its roots, the game is incredibly tactile; every switch and button feels lifelike, and players even insert a virtual coin to launch each level. And it unfolds to a relaxing and jazzy lo-fi soundtrack. “My brief to the sound designer was, ‘Please make this game less annoying,’” Glynn laughs.
While Automatoys’ machines may be intricate, its controls are anything but. Every button, claw, and catapult is controlled by a single tap. “And it doesn’t matter where you tap — the whole machine moves at once,” Glynn says. The mechanic doesn’t just make the game remarkably simple to learn; it also creates a sense of discovery. “I like that moment when the player is left thinking, ‘OK, well, I guess I’ll just start tapping and find out what happens.’”
To create levels in Automatoys, Steffan Glynn worked directly in the 3D space, starting with a basic model (top left) and creating obstacles until he reached a finished whole (bottom right).
To design each of the game’s 12 levels, Glynn first sketched his complex contraptions in Procreate. The ideas came fast and furious, but he found that building what he’d envisioned in his sketches proved elusive — so he changed his strategy. “I started playing with shapes directly in 3D space,” he says. “Once a level had a satisfying form, I’d then try to imagine what sort of obstacle each part could be. One cylinder would become a ferris wheel, another would become a spinning helix for the ball to climb, a square panel would become a maze, and so on.”
Getting your marble from point A to point B is as simple as this.
The game was a four-year passion project for Glynn, a seasoned designer who in 2018 left his gig with State of Play (where he contributed to such titles as Lumino City and Apple Design Award winner INKS.) to focus on creating “short, bespoke” games. There was just one catch: Though he had years of design experience, he’d never written a single line of code. To get up to speed, he threw himself into video tutorials and hands-on practice.
Welsh developer Steffan Glynn set out on his own in 2018 to create “short, bespoke” games.
In short order, Glynn was creating Unity prototypes of what would become Automatoys. “As a designer, being able to prototype and test ideas is incredibly liberating. When you have those tools, you can quickly try things out and see for yourself what works.”
Download Automatoys from the App Store
Behind the Design is a series that explores design practices and philosophies from each of the winners and finalists of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Hello Developer: October 2023
Find out about our latest activities (including more Apple Vision Pro developer lab dates), learn how the Plex team embraced Xcode Cloud, discover how the inventive puzzle game Automatoys came to life, catch up on the latest news, and more.
Meet with Apple Experts
Hosted in person and online, our developer activities are for everyone, no matter where you are on your development journey. Find out how to enhance your existing app or game, refine your design, or launch a new project. Explore the list of upcoming activities worldwide.
Get ready for a new round of Apple Vision Pro developer lab dates
Developers have been thrilled to experience their apps and games in the labs, and connect with Apple experts to help refine their ideas and answer questions. Ben Guerrette, chief experience officer at Spool, says, “That kind of learning experience is incredibly valuable.” Developer and podcaster David Smith says, “The first time you see your own app running for real is when you get the audible gasp.” Submit a lab request.
You can also request Apple Vision Pro compatibility evaluations. We’ll evaluate your apps and games directly on Apple Vision Pro to make sure they behave as expected and send you the results.
“Small but mighty”: Go behind the scenes with Plex
Discover how the streaming service and media player uses developer tools like Xcode Cloud to maintain a brisk release pace. “We’re relatively small,” says Danni Hemberger, director of product marketing at Plex, “but we’re mighty.”
“Small but mighty”: How Plex serves its global community
Meet the mind behind Automatoys
“I like the idea of a moment where players are left to say, ‘Well, I guess I’ll just start tapping and see what happens,’” says Steffan Glynn of Apple Design Award finalist Automatoys, an inspired puzzler in which players navigate elaborate contraptions with a single tap. Find out how Glynn brought his Rube Goldberg-inspired game to life.
The gorgeous gadgets of Automatoys
Catch up on the latest news and updates
Make sure you’re up to date on feature announcements, important guidance, new documentation, and more.
- Get ready with the latest beta releases: Build your apps using the latest developer tools and test them on the most recent OS releases. Download Xcode 15.1 beta, and the beta 2 versions of iOS 17.1, iPadOS 17.1, macOS 14.1, tvOS 17.1, and watchOS 10.1.
- Use RealityKit to create an interactive ride in visionOS: The developer sample project “Swift Splash” leverages RealityKit and Reality Composer Pro to create a waterslide by combining modular slide pieces. And once you finish your ride, you can release an adventurous goldfish to try it out.
- Take your iPad and iPhone apps even further on Apple Vision Pro: A brand‑new App Store will launch with Apple Vision Pro, featuring apps and games built for visionOS, as well as hundreds of thousands of iPad and iPhone apps that run great on visionOS too.
- App Store Connect API 3.0: This release includes support for Game Center, pre-orders by region, and more.
- Debugging universal links: Investigate why your universal links are opening in Safari instead of your app.
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Get ready with the latest beta releases
The beta versions of iOS 17.1, iPadOS 17.1, macOS 14.1, tvOS 17.1, and watchOS 10.1 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.
To check if a known issue from a previous beta release has been resolved or if there’s a workaround, review the latest release notes. Please let us know if you encounter an issue or have other comments. We value your feedback, as it helps us address issues, refine features, and update documentation.
Meet with Apple Experts
Join us around the world for a variety of sessions, consultations, labs, and more — tailored for you.
Apple developer activities are for everyone, no matter where you are on your development journey. Activities take place all year long, both online and in person around the world. Whether you’re looking to enhance your existing app or game, refine your design, or launch a new project, there’s something for you.
Pre-orders by region now available
Offering your app or game for pre-order is a great way to build awareness and excitement for your upcoming releases on the App Store. And now you can offer pre-orders on a regional basis. People can pre-order your app in a set of regions that you choose, even while it’s available for download in other regions at the same time. With this new flexibility, you can expand your app to new regions by offering it for pre-order and set different release dates for each region.
App Store submissions now open for the latest OS releases
iOS 17, iPadOS 17, macOS Sonoma, tvOS 17, and watchOS 10 will soon be available to customers worldwide. Build your apps and games using the Xcode 15 Release Candidate and latest SDKs, test them using TestFlight, and submit them for review to the App Store. You can now start deploying seamlessly to TestFlight and the App Store from Xcode Cloud. With exciting new capabilities, as well as major enhancements across languages, frameworks, tools, and services, you can deliver even more unique experiences on Apple platforms.
Xcode and Swift. Xcode 15 enables you to code and design your apps faster with enhanced code completion, interactive previews, and live animations. Swift unlocks new kinds of expressive and intuitive APIs by introducing macros. The new SwiftData framework makes it easy to persist data using declarative code. And SwiftUI brings support for creating more sophisticated animations with phases and keyframes, and simplified data flows using the new Observation framework.
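To make the SwiftData part of that concrete, here’s a minimal sketch of declarative persistence driving a SwiftUI list. The `Trip` model and view are made-up names for illustration, and a real app would also attach a `.modelContainer(for: Trip.self)` modifier at its root.

```swift
import SwiftUI
import SwiftData

// Hypothetical model: a trip persisted with SwiftData via the @Model macro.
@Model
final class Trip {
    var name: String
    var date: Date

    init(name: String, date: Date = .now) {
        self.name = name
        self.date = date
    }
}

struct TripListView: View {
    // @Query keeps the view in sync with the underlying store.
    @Query(sort: \Trip.date) private var trips: [Trip]
    @Environment(\.modelContext) private var context

    var body: some View {
        List(trips) { trip in
            Text(trip.name)
        }
        .toolbar {
            Button("Add") {
                // Inserting into the context persists the new trip.
                context.insert(Trip(name: "New trip"))
            }
        }
    }
}
```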
Widgets and Live Activities. Widgets are now interactive and run in new places, like StandBy on iPhone, the Lock Screen on iPad, the desktop on Mac, and the Smart Stack on Apple Watch. With SwiftUI, the system adapts your widget’s color and spacing based on context, extending its usefulness across platforms. Live Activities built with WidgetKit and ActivityKit are now available on iPad to help people stay on top of what’s happening live in your app.
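A rough sketch of what an interactive widget can look like with the iOS 17 APIs follows; the intent and reminder text are invented for the example, and a complete widget would still need its `Widget` configuration and timeline provider.

```swift
import WidgetKit
import SwiftUI
import AppIntents

// Hypothetical intent: marks a reminder done when the widget button is tapped.
struct MarkDoneIntent: AppIntent {
    static var title: LocalizedStringResource = "Mark Done"

    func perform() async throws -> some IntentResult {
        // Update your shared data store here; WidgetKit reloads the timeline afterwards.
        return .result()
    }
}

// A widget view using the interactive Button(intent:) initializer from iOS 17.
struct ReminderWidgetView: View {
    var body: some View {
        HStack {
            Text("Water the plants")
            Spacer()
            Button(intent: MarkDoneIntent()) {
                Image(systemName: "checkmark.circle")
            }
        }
        .containerBackground(.fill.tertiary, for: .widget)
    }
}
```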
Metal. The new game porting toolkit makes it easier than ever to bring games to Mac and the Metal shader converter dramatically simplifies the process of converting your game’s shaders and graphics code. Scale your games and production renderers to create even more realistic and detailed scenes with the latest updates to ray tracing. And take advantage of many other enhancements that make it even simpler to deliver fantastic games and pro apps on Apple silicon.
App Shortcuts. When you adopt App Shortcuts, your app’s key features are now automatically surfaced in Spotlight, letting people quickly access the most important views and actions in your app. A new design makes running your app’s shortcuts even simpler and new natural language capabilities let people execute your shortcuts with their voice with more flexibility.
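As a hedged illustration of the declarative App Shortcuts API (the intent, phrase, and symbol below are placeholders, not taken from the announcement):

```swift
import AppIntents

// Hypothetical intent exposing a key feature of the app.
struct StartWorkoutIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Workout"

    func perform() async throws -> some IntentResult {
        // Kick off the workout in the app's shared logic.
        return .result()
    }
}

// App Shortcuts are registered declaratively, so the system can surface them
// in Spotlight and Siri without any user setup.
struct ExampleShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartWorkoutIntent(),
            phrases: ["Start a workout in \(.applicationName)"],
            shortTitle: "Start Workout",
            systemImageName: "figure.run"
        )
    }
}
```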
App Store. It’s now even simpler to merchandise your in-app purchases and subscriptions across all platforms with new SwiftUI views in StoreKit. You can also test more of your product offerings using the latest enhancements to StoreKit testing in Xcode, the Apple sandbox environment, and TestFlight. With pre-orders by region, you can build customer excitement by offering your app in new regions with different release dates. And with the most dynamic and personalized app discovery experience yet, the App Store helps people find more apps through tailored recommendations based on their interests and preferences.
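The new StoreKit views can be adopted with very little code. A small sketch, assuming hypothetical product identifiers and a hypothetical subscription group ID configured in App Store Connect (or a local StoreKit configuration file for testing):

```swift
import SwiftUI
import StoreKit

// Ready-made purchase UI for individual products.
struct ShopView: View {
    var body: some View {
        StoreView(ids: [
            "com.example.coffee.small",
            "com.example.coffee.large"
        ])
    }
}

// System-provided UI for an entire auto-renewable subscription group.
struct SubscriptionsView: View {
    var body: some View {
        SubscriptionStoreView(groupID: "21345678")
            .storeButton(.visible, for: .restorePurchases)
    }
}
```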
And more. Learn about advancements in machine learning, Object Capture, Maps, Passkeys, SharePlay, and so much more.
Starting in April 2024, apps submitted to the App Store must be built with Xcode 15 and the iOS 17 SDK, tvOS 17 SDK, or watchOS 10 SDK (or later).
Apple Entrepreneur Camp applications now open
Apple Entrepreneur Camp supports underrepresented founders and developers, and encourages the pipeline and longevity of these entrepreneurs in technology. Building on the success of our alumni from cohorts for female*, Black, and Hispanic/Latinx founders, starting this fall, we’re expanding our reach to welcome professionals from Indigenous backgrounds who are looking to enhance and grow their existing app-driven businesses. Attendees benefit from one-on-one code-level guidance, receive insight, inspiration, and unprecedented access to Apple engineers and experts, and become part of the extended global network of Apple Entrepreneur Camp alumni.
Applications are now open for founders and developers from these groups who have either an existing app on the App Store, a functional beta build in TestFlight, or the equivalent. Attendees will join us online starting in October 2023. We welcome eligible entrepreneurs with app-driven organizations to apply and we encourage you to share these details with those who may be interested.
Apply by September 24, 2023.
* Apple believes that gender expression is a fundamental right. We welcome all women to apply to this program.
Take your iPad and iPhone apps even further on Apple Vision Pro
A brand‑new App Store will launch with Apple Vision Pro, featuring apps and games built for visionOS, as well as hundreds of thousands of iPad and iPhone apps that run great on visionOS too. Users can access their favorite iPad and iPhone apps side by side with new visionOS apps on the infinite canvas of Apple Vision Pro, enabling them to be more connected, productive, and entertained than ever before. And since most iPad and iPhone apps run on visionOS as is, your app experiences can easily extend to Apple Vision Pro from day one — with no additional work required.
Timing. Starting this fall, an upcoming developer beta release of visionOS will include the App Store. By default, your iPad and/or iPhone apps will be published automatically on the App Store on Apple Vision Pro. Most frameworks available in iPadOS and iOS are also included in visionOS, which means nearly all iPad and iPhone apps can run on visionOS, unmodified. Customers will be able to use your apps on visionOS early next year when Apple Vision Pro becomes available.
Making updates, if needed. In the case that your app requires a capability that is unavailable on Apple Vision Pro, App Store Connect will indicate that your app isn’t compatible and it won’t be made available. To make your app available, you can provide alternative functionality, or update its UIRequiredDeviceCapabilities. If you need to edit your existing app’s availability, you can do so at any time in App Store Connect.
To see your app in action, use the visionOS simulator in Xcode 15 beta. The simulator lets you interact with and easily test most of your app’s core functionality. To run and test your app on an Apple Vision Pro device, you can submit your app for a compatibility evaluation or sign up for a developer lab.
Beyond compatibility. If you want to take your app to the next level, you can make your app experience feel more natural on visionOS by building your app with the visionOS SDK. Your app will adopt the standard visionOS system appearance and you can add elements, such as 3D content tuned for eyes and hands input. To learn how to build an entirely new app or game that takes advantage of the unique and immersive capabilities of visionOS, view our design and development resources.
Watch the special Apple Event
Watch the replay from September 12 at apple.com.
Updated Apple Developer Program License Agreement now available
The Apple Developer Program License Agreement has been revised to support upcoming features and updated policies, and to provide clarification. The revisions include:
- Definitions, Section 3.3.39: Specified requirements for use of the Journaling Suggestions API.
- Schedule 1 Exhibit D Section 3 and Schedules 2 and 3 Exhibit E Section 3: Added language about the Digital Services Act (DSA) redress options available to developers based in the European Union.
- Schedule 1 Section 6.3 and Schedules 2 and 3 Section 7.3: Added clarifying language that the content moderation process is subject to human and systematic review and action pursuant to notices of illegal and harmful content.
Inside the Apple Vision Pro labs
As CEO of Flexibits, the team behind successful apps like Fantastical and Cardhop, Michael Simmons has spent more than a decade minding every last facet of his team’s work. But when he brought Fantastical to the Apple Vision Pro labs in Cupertino this summer and experienced it for the first time on the device, he felt something he wasn’t expecting.
“It was like seeing Fantastical for the first time,” he says. “It felt like I was part of the app.”
That sentiment has been echoed by developers around the world. Since debuting in early August, the Apple Vision Pro labs have hosted developers and designers like Simmons in London, Munich, Shanghai, Singapore, Tokyo, and Cupertino. During the day-long lab appointment, people can test their apps, get hands-on experience, and work with Apple experts to get their questions answered. Developers can apply to attend if they have a visionOS app in active development or an existing iPadOS or iOS app they’d like to test on Apple Vision Pro.
Learn more about Apple Vision Pro developer labs
For his part, Simmons saw Fantastical work right out of the box. He describes the labs as “a proving ground” for future explorations and a chance to push software beyond its current bounds. “A bordered screen can be limiting. Sure, you can scroll, or have multiple monitors, but generally speaking, you’re limited to the edges,” he says. “Experiencing spatial computing not only validated the designs we’d been thinking about — it helped us start thinking not just about left to right or up and down, but beyond borders at all.”
And as not just CEO but the lead product designer (and the guy who “still comes up with all these crazy ideas”), he came away from the labs with a fresh batch of spatial thoughts. “Can people look at a whole week spatially? Can people compare their current day to the following week? If a day is less busy, can people make that day wider? And then, what if like you have the whole week wrap around you in 360 degrees?” he says. “I could probably — not kidding — talk for two hours about this.”
‘The audible gasp’
David Smith is a prolific developer, prominent podcaster, and self-described planner. Shortly before his inaugural visit to the Apple Vision Pro developer labs in London, Smith prepared all the necessary items for his day: a MacBook, Xcode project, and checklist (on paper!) of what he hoped to accomplish.
All that planning paid off. During his time with Apple Vision Pro, “I checked everything off my list,” Smith says. “From there, I just pretended I was at home developing the next feature.”
I just pretended I was at home developing the next feature.
David Smith, developer and podcaster
Smith began working on a version of his app Widgetsmith for spatial computing almost immediately after the release of the visionOS SDK. Though the visionOS simulator provides a solid foundation to help developers test an experience, the labs offer a unique opportunity for a full day of hands-on time with Apple Vision Pro before its public release. “I’d been staring at this thing in the simulator for weeks and getting a general sense of how it works, but that was in a box,” Smith says. “The first time you see your own app running for real, that’s when you get the audible gasp.”
Smith wanted to start working on the device as soon as possible, so he could get “the full experience” and begin refining his app. “I could say, ‘Oh, that didn’t work? Why didn’t it work?’ Those are questions you can only truly answer on-device.” Now, he has plenty more plans to make — as evidenced by his paper checklist, which he holds up and flips over, laughing. “It’s on this side now.”
‘We understand where to go’
When it came to testing Pixite’s video creator and editor Spool, chief experience officer Ben Guerrette made exploring interactions a priority. “What’s different about our editor is that you’re tapping videos to the beat,” he says. “Spool is great on touchscreens because you have the instrument in front of you, but with Apple Vision Pro you’re looking at the UI you’re selecting — and in our case, that means watching the video while tapping the UI.”
The team spent time in the lab exploring different interaction patterns to address this core challenge. “At first, we didn’t know if it would work in our app,” Guerrette says. “But now we understand where to go. That kind of learning experience is incredibly valuable: It gives us the chance to say, ‘OK, now we understand what we’re working with, what the interaction is, and how we can make a stronger connection.’”
Chris Delbuck, principal design technologist at Slack, had intended to test the company’s iPadOS version of their app on Apple Vision Pro. As he spent time with the device, however, “it instantly got me thinking about how 3D offerings and visuals could come forward in our experiences,” he says. “I wouldn’t have been able to do that without having the device in hand.”
‘That will help us make better apps’
As lab participants like Smith continue their development at home, they’ve brought back lessons and learnings from their time with Apple Vision Pro. “It’s not necessarily that I solved all the problems — but I solved enough to have a sense of the kinds of solutions I’d likely need,” Smith says. “Now there’s a step change in my ability to develop in the simulator, write quality code, and design good user experiences.”
I've truly seen how to start building for the boundless canvas.
Michael Simmons, Flexibits CEO
Simmons says that the labs offered not just a playground, but a way to shape and streamline his team’s thinking about what a spatial experience could truly be. “With Apple Vision Pro and spatial computing, I’ve truly seen how to start building for the boundless canvas — how to stop thinking about what fits on a screen,” he says. “And that will help us make better apps.”
Helping customers resolve billing issues without leaving your app
As announced in April, your customers will soon be able to resolve payment issues without leaving your app, making it easier for them to stay subscribed to your content, services, and premium features.
Starting August 14, 2023, if an auto-renewable subscription doesn’t renew because of a billing issue, a system-provided sheet will appear in your app with a prompt that lets customers update the payment method for their Apple ID. You can test this sheet in Sandbox, as well as delay or suppress it using messages and display in StoreKit. This feature is available in iOS 16.4 and iPadOS 16.4 or later, and no action is required to adopt it.
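One possible way to defer that sheet with the StoreKit `Message` API is sketched below (assuming iOS 16.4 or later; `BillingMessageController` is an illustrative name, not a system type).

```swift
import StoreKit
import UIKit

// A sketch: listen for pending StoreKit messages and hold back the
// billing-issue sheet until a convenient moment in the app's flow.
@MainActor
final class BillingMessageController {
    private var deferred: [Message] = []

    func startObserving(in scene: UIWindowScene) {
        Task {
            for await message in Message.messages {
                if message.reason == .billingIssue {
                    // Defer the sheet while the user is mid-task.
                    deferred.append(message)
                } else {
                    try? message.display(in: scene)
                }
            }
        }
    }

    /// Call when it's a good moment to interrupt, such as returning to the home screen.
    func presentDeferredMessages(in scene: UIWindowScene) {
        for message in deferred {
            try? message.display(in: scene)
        }
        deferred.removeAll()
    }
}
```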
List of APIs that require declared reasons now available
Apple is committed to protecting user privacy on our platforms. We know that there is a small set of APIs that can be misused to collect data about users’ devices through fingerprinting, which is prohibited by our Developer Program License Agreement. To prevent the misuse of these APIs, we announced at WWDC23 that developers will need to declare the reasons for using these APIs in their app’s privacy manifest. This will help ensure that apps only use these APIs for their intended purpose. As part of this process, you’ll need to select one or more approved reasons that accurately reflect how your app uses the API, and your app can only use the API for the reasons you’ve selected.
Starting in fall 2023, when you upload a new app or app update to App Store Connect that uses an API (including from third-party SDKs) that requires a reason, you’ll receive a notice if you haven’t provided an approved reason in your app’s privacy manifest. And starting in spring 2024, in order to upload your new app or app update to App Store Connect, you’ll be required to include an approved reason in the app’s privacy manifest which accurately reflects how your app uses the API.
If you have a use case for an API with required reasons that isn’t already covered by an approved reason and the use case directly benefits the people using your app, let us know.
Meet with App Store experts
Join us for online sessions August 1 through 24 to learn about the latest App Store features and get your questions answered. Live presentations with Q&A are being held in multiple time zones and languages.
- Explore App Store pricing upgrades, including enhanced global pricing, tools to manage pricing by storefront, and additional price points.
- Find out how to measure user acquisition with App Analytics and grow your subscription business using App Store features.
- Discover how product page optimization lets you test different elements of your product page to find out which resonate with people most.
- Understand how custom product pages let you create additional product page versions to highlight specific features or content.
- Learn how to boost discovery and engagement with Game Center and how to configure in-app events.
Take your apps and games beyond the visionOS simulator
Apple Vision Pro compatibility evaluations
We can help you make sure your visionOS, iPadOS, and iOS apps behave as expected on Vision Pro. Align your app with the newly published compatibility checklist, then request to have your app evaluated directly on Vision Pro.
Apple Vision Pro developer labs
Experience your visionOS, iPadOS, and iOS apps running on Vision Pro. With support from Apple, you’ll be able to test and optimize your apps for the infinite spatial canvas. Labs are available in Cupertino, London, Munich, Shanghai, Singapore, and Tokyo.
Apple Vision Pro developer kit
Have a great idea for a visionOS app that requires building and testing on Vision Pro? Apply for a Vision Pro developer kit. With continuous, direct access to Vision Pro, you’ll be able to quickly build, test, and refine your app so it delivers amazing spatial experiences on visionOS.
Recent content on Mobile A11y
iOS Accessibility Values
For iOS, accessibility values are one of the building blocks of how accessibility works on the platform, along with traits, labels, hints, and showing/hiding elements. If you’re familiar with WCAG or web accessibility, accessibility values are the value part of WCAG 4.1.2: Name, Role, Value.
Values
Not every element in your view will have a value - in fact, most won’t. Any element that ‘contains’ some data not already included in the element’s label requires an accessibility value to be set.
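A minimal UIKit sketch of the idea, using a made-up rating control whose current value isn’t part of its label:

```swift
import UIKit

// Hypothetical custom control: a star-rating view whose current rating
// isn't in its label, so it's exposed through the accessibility value.
final class StarRatingView: UIView {
    var rating: Int = 3 {
        didSet { updateAccessibility() }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        isAccessibilityElement = true
        accessibilityLabel = "Rating"
        accessibilityTraits = .adjustable
        updateAccessibility()
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    private func updateAccessibility() {
        // VoiceOver reads: "Rating, 3 of 5, adjustable".
        accessibilityValue = "\(rating) of 5"
    }

    // Adjustable elements should respond to increment/decrement gestures.
    override func accessibilityIncrement() { rating = min(rating + 1, 5) }
    override func accessibilityDecrement() { rating = max(rating - 1, 0) }
}
```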
iOS UIKit Accessibility traits
Accessibility traits on iOS are the system by which assistive technologies know how to present your interface to your users. The exact experience will vary between assistive technologies: in some cases they may change the intonation of what VoiceOver reads or add additional options for navigation; sometimes they will prevent that assistive technology from accessing the element, or change how the assistive tech functions. They are the ‘Role’ part of the fundamental rule of making something accessible to screen readers - WCAG’s 4.1.2: Name, Role, Value.
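A few illustrative UIKit examples of assigning traits (the views and labels here are placeholders):

```swift
import UIKit

// A view that behaves like a button but isn't a UIButton needs to say so.
let card = UIView()
card.isAccessibilityElement = true
card.accessibilityLabel = "Order history"
card.accessibilityTraits = .button

// Traits can be combined; a currently selected tab might report both.
let tab = UIView()
tab.isAccessibilityElement = true
tab.accessibilityLabel = "Favourites"
tab.accessibilityTraits = [.button, .selected]

// A purely decorative element can opt out of accessibility entirely.
let divider = UIView()
divider.isAccessibilityElement = false
```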
iOS Custom Accessibility Actions
When testing your app with VoiceOver or Switch Control, a common test is to ensure you can reach every interactive element on screen. If these assistive technologies can’t focus all of your buttons, how will your customers be able to interact fully with your app? Except there are times when hiding buttons from your assistive technology users is the better choice. Consider an app with a table view that has many repeating interactive elements - this could be a social media app where ‘like, share, reply’ etc. is repeated for each post.
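A sketch of that pattern with `UIAccessibilityCustomAction`, using an invented social-post cell:

```swift
import UIKit

// Hypothetical post cell: the like/share/reply buttons are hidden from
// assistive technologies and exposed as custom actions on the cell instead,
// so VoiceOver and Switch Control users swipe through one element per post.
final class PostCell: UITableViewCell {
    func configureAccessibility(author: String, summary: String) {
        isAccessibilityElement = true
        accessibilityLabel = "\(author). \(summary)"
        accessibilityCustomActions = [
            UIAccessibilityCustomAction(name: "Like") { _ in
                // Perform the same work as tapping the like button.
                true
            },
            UIAccessibilityCustomAction(name: "Share") { _ in true },
            UIAccessibilityCustomAction(name: "Reply") { _ in true }
        ]
    }
}
```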
Test Your App's Accessibility with Evinced
Disclosure: Evinced has paid for my time in writing this blog, and I have provided them feedback on the version of their tool reviewed and an early beta. I agreed to this because I believe in the product they are offering. Testing your app for accessibility is an essential part of making an accessible app, as with any part of the software you build, if you don’t test it, how can you be sure it works?
How Do I Get My App an Accessibility Audit?
This is a common question I get asked - how do I go about arranging an accessibility audit for my app so I know where I can make improvements? If you’re truly looking for an answer to that question then I have a few options for you below, but first, are you asking the right question?
Accessibility Isn’t About Box Ticking
You can’t make your app accessible by getting a report, fixing the findings, and accepting it as done.
Quick Win - Start UI Testing
I’ll admit, adding UI testing to an app that currently doesn’t have it included is probably stretching the definition of quick win, but the aim here isn’t 100% coverage - not right away anyway. Start small and add to your test suite as you gain confidence. Even a small suite of crucial happy-path UI tests will help to ensure and preserve accessibility in your app. And the more you get comfortable with UI tests, the more accessible your apps will become, because an app that is easy to test is also great for accessibility.
Quick Win - Support Dark Mode
Many people don’t realise dark mode is an accessibility feature. It’s often just considered a nice to have, a cool extra feature that power users will love. But dark mode is also a valuable accessibility feature. Some types of visual impairment can make it painful to look at bright colours, or large blocks of white might wash over the black text. Some people with dyslexia or Irlen’s Syndrome can struggle to read black text on a white background.
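In code, supporting dark mode is mostly a matter of using semantic or dynamic colours; a small sketch (the brand colour values are placeholders):

```swift
import UIKit

// Prefer system semantic colours, which adapt to light and dark mode automatically.
let background = UIColor.systemBackground
let bodyText = UIColor.label

// For brand colours, a dynamic provider returns a different value per appearance.
let accent = UIColor { traits in
    traits.userInterfaceStyle == .dark
        ? UIColor(red: 0.55, green: 0.75, blue: 1.0, alpha: 1.0)
        : UIColor(red: 0.05, green: 0.30, blue: 0.65, alpha: 1.0)
}
```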
Quick Win - Support Landscape
If you have a regulatory requirement to provide accessibility in your app (spoiler: you do), the chances are it will say you have a requirement to reach WCAG AA. While this is likely meaningless to anyone other than accessibility professionals, in short it means you are providing the minimum level of accessibility features required to make your app usable by the majority of people. This post is about one such requirement, the jazzily titled Success Criterion 1.
Quick Win - Image Descriptions
Images are a major part of our apps. They add meaning and interest, and they give your app character and context. The adage is that a picture is worth a thousand words. But if you can’t see the image clearly, how do you know what those words are? If you aren’t providing image descriptions in your app, many of your users will be missing out on the experience you’ve crafted. The result can be an app that’s missing that spark and character, or worse, an app that’s just meaningless and unusable.
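A minimal UIKit sketch of both cases, with made-up image names:

```swift
import UIKit

// An image that carries meaning needs a description.
let chart = UIImageView(image: UIImage(named: "sales-chart"))
chart.isAccessibilityElement = true
chart.accessibilityLabel = "Sales rising steadily from January to June"

// A purely decorative image should stay hidden from assistive technologies.
let flourish = UIImageView(image: UIImage(named: "divider-flourish"))
flourish.isAccessibilityElement = false
```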
Quick Win - Text Contrast
How many shades of grey do you use in your app? OK, maybe that’s a bit cruel towards designers; grey is a great colour, but the problem with grey is that it can be deceptively difficult to distinguish from a background. And this problem is not just limited to greys - lighter colours too can blend into the background. This effect can be heightened for people who have blurred or obscured vision, or one of many forms of colour blindness.
iOS 14: Custom Accessibility Content
Each year at WWDC Xcode Santa brings us exciting new APIs to play with, and this year our accessibility present is Customized Accessibility Content. This API flew under the radar a little; I’m told this is because it’s so new there wasn’t even time for inclusion at WWDC. But this new feature helps to solve a difficult question when designing a VoiceOver interface - where is the balance between too much and too little content?
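A rough sketch of the API, using a hypothetical contact cell whose fields are invented for the example:

```swift
import UIKit
import Accessibility

// The essentials stay in the label, while extra details are offered as custom
// content VoiceOver users can request ("more content available") on demand.
final class ContactCell: UITableViewCell, AXCustomContentProvider {
    var accessibilityCustomContent: [AXCustomContent]! = []

    func configure(name: String, role: String, office: String, startDate: String) {
        accessibilityLabel = "\(name), \(role)"

        let officeContent = AXCustomContent(label: "Office", value: office)
        let startedContent = AXCustomContent(label: "Started", value: startDate)
        // High-importance content is read automatically; the rest is on request.
        startedContent.importance = .high
        accessibilityCustomContent = [officeContent, startedContent]
    }
}
```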
Accessibility Review: Huh? - International languages
The Accessibility Review series uses real world apps to provide examples of common accessibility issues and provide tips on how to fix them. Each of the developers has kindly volunteered their app to be tested. Huh? is a dictionary and thesaurus app from Peter Yaacoub. Enter a word into the search bar then choose a dictionary service. Press search and the app will present your chosen service’s entry for the term you entered.
Accessibility Review: Figure Case - Button Labels
The Accessibility Review series uses real world apps to provide examples of common accessibility issues and provide tips on how to fix them. Each of the developers has kindly volunteered their app to be tested. Figure Case is an app to help organise a tabletop miniature collection created by Simon Nickel. The app helps to track miniatures you own, and what state they currently find themselves in - unassembled, assembled, or painted.
Accessibility Review: Daily Dictionary - Screen changes
The Accessibility Review series uses real world apps to provide examples of common accessibility issues and provide tips on how to fix them. Each of the developers has kindly volunteered their app to be tested. Daily Dictionary is an app from Benjamin Mayo providing a new word every day with definitions and real-world uses designed to help increase your vocabulary. Assessing the app, I noticed Benjamin has made a design decision around presenting the app’s settings.
iOS Attributed Accessibility Labels
Attributed accessibility labels are an incredible tool for making some next-level accessible experiences. They let you tell VoiceOver not just what to speak, but how to say it too. Using the accessibilityAttributedLabel property you can provide an NSAttributedString to VoiceOver, much the same way you would provide an NSAttributedString to a label’s attributedText property to display a string with an underline or character colour for example. The difference here is that all of our attributes are instructions for VoiceOver.
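For example, here is a sketch using a couple of the speech attributes (the pitch value and code string are arbitrary):

```swift
import UIKit

// A label for a verification code: spell the code out character by character,
// and lower the pitch slightly for the surrounding explanation.
let codeLabel = UILabel()
codeLabel.text = "Your code is AB12"

let attributed = NSMutableAttributedString(
    string: "Your code is ",
    attributes: [.accessibilitySpeechPitch: 0.8]
)
attributed.append(NSAttributedString(
    string: "AB12",
    attributes: [.accessibilitySpeechSpellOut: true]
))

codeLabel.accessibilityAttributedLabel = attributed
```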
Writing Great iOS Accessibility Labels
A good accessibility label lets your customer know exactly what a control does in as few words as possible, without having to rely on implied context.
Don’t Add the Element Type
iOS already knows your button is a button and your image is an image; it does this using an accessibility trait. If you label your button as ‘Play button’, your VoiceOver customers will hear ‘Play button. Button.’
Keep it Succinct
Don’t frustrate your customer by adding too much information to your labels.
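A tiny illustration of the first rule:

```swift
import UIKit

let playButton = UIButton(type: .system)
playButton.setImage(UIImage(systemName: "play.fill"), for: .normal)

// Avoid: VoiceOver would read "Play button. Button."
// playButton.accessibilityLabel = "Play button"

// Prefer: the .button trait already tells VoiceOver what kind of element it is.
playButton.accessibilityLabel = "Play"
```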
When to use Accessibility Labels
There’s a few circumstances when you’ll want to set your own accessibility label, such as… An interactive element that you haven’t given a text value to, such as an image button. An interactive element with a long visual label. An interactive element with a short visual label that takes context from your design. A control or view you have created yourself or built by combining elements. Images of text. Elements Without a text value Take the controls for a music player as an example.
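To make the music player example concrete (the symbols and labels below are placeholders, not taken from the original post), image-only controls have no text value of their own, so they need explicit labels:

```swift
import UIKit

// Image-only controls give VoiceOver nothing to read unless we provide a label.
let skipBackButton = UIButton(type: .system)
skipBackButton.setImage(UIImage(systemName: "backward.fill"), for: .normal)
skipBackButton.accessibilityLabel = "Skip back"

let playButton = UIButton(type: .system)
playButton.setImage(UIImage(systemName: "play.fill"), for: .normal)
playButton.accessibilityLabel = "Play" // not "Play button" – the button trait already adds "Button"
```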
iOS Accessibility Labels
This blog was inspired by Jeff Watkins’ series of blogs on UIButton. UIButton is a fundamental part of building interfaces on iOS. So much so, that it probably doesn’t get the love it deserves. But it’s also really powerful and customisable when used correctly. Accessibility labels on iOS I feel are very similar. They’re fundamental to how accessibility works on iOS, yet I think they suffer from a few PR issues.
A11y Box Android
A few months ago I shared a project I’d been working on for iOS exploring the accessibility API available on that platform. The Android accessibility API is equally large and full featured, and really deserves the same treatment. So here’s A11y Box for Android. A11y Box for Android is an exploration of what is available on the Android accessibility api and how you can make use of it in your apps.
Mobile A11y Talk: Accessibility in SwiftUI
I was supposed to be attending the 2020 CSUN Assistive Technology conference to present a couple of talks, unfortunately with COVID-19 starting to take hold at that time, I wasn’t able to attend. In lieu of attending I decided to record one of the talks I was scheduled to present on Accessibility in SwiftUI. SwiftUI is Apple’s new paradigm for creating user interfaces on Apple platforms, and it has a bunch of new approaches that really help create more accessible experiences.
A11y Box iOS
iOS’ UIAccessibility API is huge. I like to think I know it pretty well, but I’m always being surprised by discovering features I previously had no idea about. Like many things on iOS, the documentation for UIAccessibility is not always complete, even for parts of the API that have been around for years. In an attempt to help spread the knowledge of some of the awesome things UIAccessibility is capable of, I’ve created A11y Box for iOS.
Android Live Regions
Live Regions are one of my favourite accessibility features on Android. They’re a super simple solution to a common accessibility problem that people with visual impairments can stumble across. Say you have a game app, really any type of game. Your user interacts with the play area, and as they do, their score increases or decreases depending on your customer’s actions. In this example, the score display is separate to the element your customer is interacting with.
A11yUITests: An XCUI Testing library for accessibility
A11yUITests is an extension to XCTestCase that adds tests for common accessibility issues that can be run as part of an XCUITest suite. I’ve written a detailed discussion of the tests available if you’re interested in changing/implementing these tests yourself. Alternatively, follow this quick start guide. Getting Started Adding A11yUITests I’m assuming you’re already familiar with CocoaPods; if not, cocoapods.org has a good introduction. There is one minor difference here compared to most pods: we’re not including this pod in our app, but in our app’s test bundle.
XCUITests for accessibility
For a while now I’ve been looking at possibilities for automated accessibility testing on iOS. Unfortunately, I’ve not found any option so far that I’m happy with. I am a big fan of Apple’s XCUI Test framework. Although it has its limitations, I believe there’s scope for creating valid accessibility tests using this framework. Over the last few months I’ve been trying things out, and here’s what I’ve come up with.
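As a sketch of the kind of check this makes possible (the test name and the app under test are hypothetical, not the library mentioned above), an XCUITest can walk the visible buttons and assert that each one exposes a non-empty label:

```swift
import XCTest

final class AccessibilityTests: XCTestCase {
    func testAllButtonsHaveAccessibilityLabels() {
        let app = XCUIApplication()
        app.launch()

        // Every button on the current screen should expose a label for VoiceOver.
        for button in app.buttons.allElementsBoundByIndex {
            XCTAssertFalse(button.label.isEmpty,
                           "Button \(button) has no accessibility label")
        }
    }
}
```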
Resources
This is a personally curated list of resources I have used and think others may find helpful too. I’m always looking for new high quality mobile accessibility and inclusion resources to add here. Please share any you find with me via email or Twitter. Code Android Android Developers: Build more accessible apps. Android’s developer documentation for Accessibility, including design, building & testing. With videos, code samples, and documentation. Android: Make apps more accessible. Google’s guide to improving accessibility on Android
Review: Accessibility for Everyone - Laura Kalbag
Laura’s introduction to web accessibility jumped out to me because it’s available as an audiobook. Being dyslexic I struggle to read, so prefer to listen to audiobooks where available. Unfortunately, most technical books aren’t available as audiobooks for a couple of potentially obvious reasons. Hearing code or descriptions of diagrams and illustrations read aloud may not be the best experience for an audiobook. As such, this book chooses to leave those out of the audio version.
A11y is not accessible
Accessibility is a long word. It’s not the simplest of words to read or to spell, so it seems like a word that would be a good candidate for abbreviation. The common abbreviation of accessibility is a11y. We take the A and Y from the beginning and end of accessibility, and 11 for the number of letters in between. This abbreviation also creates a pleasing homophone for ‘ally.’ The irony of this abbreviation is that a11y isn’t accessible.
About Mobile A11y
About Mobile A11y Mobile A11y is a collection of blogs and resources about how we as mobile developers can improve accessibility on mobile devices. From time to time the blog might also touch on related topics such as digital inclusion, and other topics around ethics in technology. The site is aimed at mobile developers and is written by a mobile developer. I hope this means other mobile developers will find the content relatable and engaging, and you’ll find learning about mobile accessibility along with me helpful.
SwiftUI Accessibility
Accessibility is important. We can take that as a given. But as iOS devs we’re not always sure how to make the most of the accessibility tools that Apple have provided us. We’re lucky as iOS developers that we work on such a forward-thinking accessibility platform. Many people consider Apple’s focus on accessibility for iOS as the driver for other technology vendors to include accessibility features as standard. To the point that we now consider accessibility an expected part of any digital platform.
SwiftUI Accessibility: Semantic Views
Semantic views are not new to SwiftUI, but changes in SwiftUI mean creating them is simple. Semantic views are not so much a language feature. They’re more a technique for manipulating the accessible user interface and improving the experience for assistive technology users. A what view? A semantic view is not one view, but a collection of views grouped together because they have meaning (or semantic) together. Take a look at this iOS table view cell from the files app.
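A minimal sketch of the technique in SwiftUI (the cell content is a placeholder, not the Files app example): grouping the child views into one element gives VoiceOver a single, meaningful stop instead of three separate ones.

```swift
import SwiftUI

struct FileCell: View {
    var body: some View {
        HStack {
            Image(systemName: "doc")
            VStack(alignment: .leading) {
                Text("Report.pdf")
                Text("Yesterday · 2.3 MB")
            }
        }
        // Combine the children into one semantic element so VoiceOver
        // reads the whole cell with a single swipe.
        .accessibilityElement(children: .combine)
    }
}
```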
SwiftUI Accessibility: User Settings
SwiftUI allows us to read environmental values that might affect how we want to present our UI. Things like size classes and locale for example. We also get the ability to read some of the user’s chosen accessibility settings allowing us to make decisions that will best fit with your customer’s preference. Why? Before we cover what these options are and how to detect them I think it’s important to briefly cover why we need to detect them.
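For example (a minimal sketch; the badge view is arbitrary), SwiftUI exposes these settings as environment values you can branch on:

```swift
import SwiftUI

struct StatusBadge: View {
    // Reads the user's "Differentiate Without Color" preference from the environment.
    @Environment(\.accessibilityDifferentiateWithoutColor) var differentiateWithoutColor
    var isOnline: Bool

    var body: some View {
        Group {
            if differentiateWithoutColor {
                // Convey the state with a symbol rather than colour alone.
                Image(systemName: isOnline ? "checkmark.circle" : "xmark.circle")
            } else {
                Circle()
                    .fill(isOnline ? Color.green : Color.red)
                    .frame(width: 12, height: 12)
            }
        }
    }
}
```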
SwiftUI Accessibility: Attributes
When a customer enables an assistive technology to navigate your app the interface that technology navigates isn’t exactly the same as the one visible on the screen. They’re navigating a modified version that iOS creates especially for assistive technology. This is known as the accessibility tree or accessible user interface. iOS does an incredible job at creating the AUI for you from your SwiftUI code. We can help iOS in creating this by tweaking some elements’ accessibility attributes.
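A small sketch of tweaking attributes (the view and asset name are hypothetical), using the modifier spellings from this era of SwiftUI:

```swift
import SwiftUI

struct RatingView: View {
    var stars: Int

    var body: some View {
        Image("stars-\(stars)")
            // Replace the asset name VoiceOver would otherwise read
            // with a meaningful label and value.
            .accessibility(label: Text("Rating"))
            .accessibility(value: Text("\(stars) out of 5 stars"))
    }
}
```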
SwiftUI Accessibility: Traits
Accessibility traits are a group of attributes on a SwiftUI element. They inform assistive technologies how to interact with the element or present it to your customer. Each element has a selection of default traits, but you might need to change these as you create your UI. In SwiftUI there are two modifiers to use for traits, .accessibility(addTraits: ) and .accessibility(removeTraits: ) which add or remove traits respectively. Each modifier takes as its argument either a single accessibility trait or a set of traits.
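For instance (a sketch with a hypothetical custom control), a tappable image can be made to read as a button:

```swift
import SwiftUI

struct PlayControl: View {
    var body: some View {
        Image(systemName: "play.fill")
            .onTapGesture {
                // start playback
            }
            // Tell assistive technologies this element behaves like a button…
            .accessibility(addTraits: .isButton)
            // …and that announcing it as an image isn't useful here.
            .accessibility(removeTraits: .isImage)
    }
}
```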
Review: Design Meets Disability - Graham Pullin
Design Meets Disability was recommended to me by accessibility consultant Jon Gibbins while we were sharing a long train journey through mid-Wales. We were talking, amongst many things, about our love for Apple products and their design. I am a hearing aid wearer, my aid is two-tone grey. A sort of dark taupe grey above, and a darker, almost gun-metal grey below. There’s a clear tube into my ear. This is fine, I don’t hate it.
Podcast: iPhreaks - iOS Accessibility
I was asked to guest on the iPhreaks podcast to discuss iOS accessibility. We talked about why accessibility is important, how you can improve it in your apps, and some of the changes iOS 13 and SwiftUI bring. Unfortunately, iPhreaks don’t provide a transcript, but they do provide a comprehensive write-up on their site.
SwiftUI Accessibility: Accessible User Interface
Take a look at your app. Notice the collection of buttons, text, images, and other controls you can see and interact with that make up your app’s user interface. When one of your customers navigates your app with Voice Control, Switch Control, VoiceOver, or any other assistive technology, this isn’t the interface they’re using. Instead, iOS creates a version of your interface for assistive technology to use. This interface is generally known as the accessibility tree.
Mobile A11y Talk: Accessibility without the 'V' Word
I was honoured in 2019 to be able to give my first full conference talk at CodeMobile. I was then lucky enough to be able to repeat that talk at NSLondon, NSManchester, and SWMobile meetups. As an iOS developer, I know accessibility is important for a huge range of people. But at times I think I can treat it like an afterthought. Accessibility Without the ‘V’ Word covers a skill I think we as software engineers would benefit from developing - empathy towards our users.
SwiftUI Accessibility: Sort Priority
Assistive technology, such as VoiceOver, works in natural reading direction. In English, and most other languages, this means top left through to the bottom right. Mostly this is the right decision for assistive technology to make. This is the order anyone not using assistive technology would experience your app. Sometimes though, we make designs that don’t read in this way. By using the .accessibility(sortPriority: ) modifier we can set the order in which assistive technology accesses elements.
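As a sketch (the layout and scores are arbitrary), higher sortPriority values are visited first by assistive technology:

```swift
import SwiftUI

struct ScoreView: View {
    var body: some View {
        VStack(alignment: .leading) {
            Text("High score: 2,417")
                // Lower priority: VoiceOver reaches this second.
                .accessibility(sortPriority: 0)
            Text("Your score: 1,050")
                // Higher priority: read first, despite appearing lower in the layout.
                .accessibility(sortPriority: 1)
        }
        // Grouping the children gives the sort priorities a container to apply within.
        .accessibilityElement(children: .contain)
    }
}
```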
SwiftUI Accessibility - Named Controls
One big accessibility improvement in SwiftUI comes in the form of named controls. Nearly all controls and some non-interactive views (see Images) can take a Text view as part of their view builder. The purpose of this is to tie the meaning to the control. Toggle(isOn: $updates) { Text("Send me updates") } Imagine a UIKit layout with a UISwitch control. We’d most likely right align the switch, and provide a text label to the left.
SwiftUI Accessibility: Dynamic Type
Like all accessibility features, Dynamic Type is about customisability. Many of your customers, and maybe even you, are using Dynamic Type without even considering it an accessibility feature. Dynamic type allows iOS users to set the text to a size that they find comfortable to read. This may mean making it a little larger so it’s easier to read for those of us who haven’t yet accepted we might need glasses.
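A quick sketch of what this means in SwiftUI (the text is arbitrary): built-in text styles track the user's chosen size automatically, while fixed point sizes don't.

```swift
import SwiftUI

struct ArticleView: View {
    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            Text("Dynamic Type")
                .font(.headline) // text styles scale with the user's preferred size
            Text("Built-in text styles adjust automatically as the user changes their text size.")
                .font(.body)
            Text("A fixed point size ignores the user's setting.")
                .font(.system(size: 17)) // avoid: does not track Dynamic Type
        }
    }
}
```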
SwiftUI Accessibility: Images
Images in SwiftUI are accessible by default. This is the opposite of what we’d experience in UIKit, where images are not accessible unless you set isAccessibilityElement to true. Sometimes making images not accessible to VoiceOver is the right decision. Like when using a glyph as a redundant way of conveying meaning alongside text. An example of this would be displaying a warning triangle next to the text ‘Error’ or a tick next to ‘success’.
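In code the decision looks something like this (asset names and labels are placeholders): use the decorative initialiser for glyphs that merely repeat adjacent text, and keep a label for images that carry meaning on their own.

```swift
import SwiftUI

struct ErrorRow: View {
    var body: some View {
        HStack {
            // The glyph only repeats the meaning of the adjacent text,
            // so the decorative initialiser hides it from VoiceOver.
            Image(decorative: "warning-triangle")
            Text("Error")
        }
    }
}

struct ProfileHeader: View {
    var body: some View {
        // This image carries meaning on its own, so it keeps a label.
        Image("profile-photo")
            .accessibility(label: Text("Your profile photo"))
    }
}
```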
Baking Digital Inclusion Into Your Mobile Apps
I was asked by Capital One to contribute an accessibility piece to the Capital One Tech Medium. The blog, titled Baking Digital Inclusion Into Your Mobile Apps, briefly covers what we mean by disability and what we can do to make our mobile apps work better for everyone.
What The European Accessibility Act (Might) Mean for Mobile Development
The European Accessibility Act, or EAA, is due to become law in Europe later this year, and it defines some specific requirements for mobile. In fact, it’s the first accessibility legislation that I’m aware of, anywhere, that explicitly covers mobile apps. Since 2012 the European Union has been working on standardising accessibility legislation across Europe. The ultimate aim is both to improve the experience for those who need to use assistive technology and to simplify the rules businesses need to follow on accessibility.
Building with nightly Swift toolchains on macOS
The Swift website provides nightly builds of the Swift compiler (called toolchains) for download. Building with a nightly compiler can be useful if you want to check if a bug has already been fixed on main, or if you want to experiment with upcoming language features such as Embedded Swift, as I’ve been doing lately. A toolchain is distributed as a .pkg installer that installs itself into /Library/Developer/Toolchains (or the equivalent path in your home directory). After installation, you have several options to select the toolchain you want to build with: In Xcode In Xcode, select the toolchain from the main menu (Xcode > Toolchains), then build and/or run your code normally. Not all Xcode features work with a custom toolchain. For example, playgrounds don’t work, and Xcode will always use its built-in copy of the Swift Package Manager, so you won’t be able to use unreleased SwiftPM features in this way. Also, Apple won’t accept apps built with a non-standard toolchain for submission to the App Store. On the command line When building on the command line there are multiple options, depending on your preferences and what tool you want to use. The TOOLCHAINS environment variable All of the various Swift build tools respect the TOOLCHAINS environment variable. This should be set to the desired toolchain’s bundle ID, which you can find in the Info.plist file in the toolchain’s directory. Example (I’m using a nightly toolchain from 2024-03-03 here): # My normal Swift version is 5.10 $ swift --version swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4) # Make sure xcode-select points to Xcode, not to /Library/Developer/CommandLineTools # The Command Line Tools will ignore the TOOLCHAINS variable. $ xcode-select --print-path /Applications/Xcode.app/Contents/Developer # The nightly toolchain is 6.0-dev $ export TOOLCHAINS=org.swift.59202403031a $ swift --version Apple Swift version 6.0-dev (LLVM 0c7823cab15dec9, Swift 0cc05909334c6f7) Toolchain name vs. bundle ID I think the TOOLCHAINS variable is also supposed to accept the toolchain’s name instead of the bundle ID, but this doesn’t work reliably for me. I tried passing: the DisplayName from Info.plist (“Swift Development Snapshot 2024-03-03 (a)”), the ShortDisplayName (“Swift Development Snapshot”; not unique if you have more than one toolchain installed!), the directory name, both with and without the .xctoolchain suffix, but none of them worked reliably, especially if you have multiple toolchains installed. In my limited testing, it seems that Swift picks the first toolchain that matches the short name prefix (“Swift Development Snapshot”) and ignores the long name components. For example, when I select “Swift Development Snapshot 2024-03-03 (a)”, Swift picks swift-DEVELOPMENT-SNAPSHOT-2024-01-30-a, presumably because that’s the “first” one (in alphabetical order) I have installed. My advice: stick to the bundle ID, it works. Here’s a useful command to find the bundle ID of the latest toolchain you have installed (you may have to adjust the path if you install your toolchains in ~/Library instead of /Library): $ plutil -extract CFBundleIdentifier raw /Library/Developer/Toolchains/swift-latest.xctoolchain/Info.plist org.swift.59202403031 # Set the toolchain to the latest installed: export TOOLCHAINS=$(plutil -extract CFBundleIdentifier raw /Library/Developer/Toolchains/swift-latest.xctoolchain/Info.plist) xcrun and xcodebuild xcrun and xcodebuild respect the TOOLCHAINS variable too. 
As an alternative, they also provide an equivalent command line parameter named --toolchain. The parameter has the same semantics as the environment variable: you pass the toolchain’s bundle ID. Example: $ xcrun --toolchain org.swift.59202403031a --find swiftc /Library/Developer/Toolchains/swift-DEVELOPMENT-SNAPSHOT-2024-03-03-a.xctoolchain/usr/bin/swiftc Swift Package Manager SwiftPM also respects the TOOLCHAINS variable, and it has a --toolchains parameter as well, but this one expects the path to the toolchain, not its bundle ID. Example: $ swift build --toolchain /Library/Developer/Toolchains/swift-latest.xctoolchain Missing toolchains are (silently) ignored Another thing to be aware of: if you specify a toolchain that isn’t installed (e.g. because of a typo or because you’re trying to run a script that was developed in a different environment), none of the tools will fail: swift, xcrun, and xcodebuild silently ignore the toolchain setting and use the default Swift toolchain (set via xcode-select). SwiftPM silently ignores a missing toolchain set via TOOLCHAINS. If you pass an invalid directory to the --toolchains parameter, it at least prints a warning before it continues building with the default toolchain. I don’t like this. I’d much rather get an error if the build tool can’t find the toolchain I told it to use. It’s especially dangerous in scripts.
How the Swift compiler knows that DispatchQueue.main implies @MainActor
You may have noticed that the Swift compiler automatically treats the closure of a DispatchQueue.main.async call as @MainActor. In other words, we can call a main-actor-isolated function in the closure: import Dispatch @MainActor func mainActorFunc() { } DispatchQueue.main.async { // The compiler lets us call this because // it knows we're on the main actor. mainActorFunc() } This behavior is welcome and very convenient, but it bugs me that it’s so hidden. As far as I know it isn’t documented, and neither Xcode nor any other editor/IDE I’ve used do a good job of showing me the actor context a function or closure will run in, even though the compiler has this information. I’ve written about a similar case before in Where View.task gets its main-actor isolation from, where Swift/Xcode hide essential information from the programmer by not showing certain attributes in declarations or the documentation. It’s a syntax check So how is the magic behavior for DispatchQueue.main.async implemented? It can’t be an attribute or other annotation on the closure parameter of the DispatchQueue.async method because the actual queue instance isn’t known at that point. A bit of experimentation reveals that it is in fact a relatively coarse source-code-based check that singles out invocations on DispatchQueue.main, in exactly that spelling. For example, the following variations do produce warnings/errors (in Swift 5.10/6.0, respectively), even though they are just as safe as the previous code snippet. This is because we aren’t using the “correct” DispatchQueue.main.async spelling: let queue = DispatchQueue.main queue.async { // Error: Call to main actor-isolated global function // 'mainActorFunc()' in a synchronous nonisolated context mainActorFunc() // ❌ } typealias DP = DispatchQueue DP.main.async { // Error: Call to main actor-isolated global function // 'mainActorFunc()' in a synchronous nonisolated context mainActorFunc() // ❌ } I found the place in the Swift compiler source code where the check happens. In the compiler’s semantic analysis stage (called “Sema”; this is the phase right after parsing), the type checker calls a function named adjustFunctionTypeForConcurrency, passing in a Boolean it obtained from isMainDispatchQueueMember, which returns true if the source code literally references DispatchQueue.main. In that case, the type checker adds the @_unsafeMainActor attribute to the function type. Good to know. Fun fact: since this is a purely syntax-based check, if you define your own type named DispatchQueue, give it a static main property and a function named async that takes a closure, the compiler will apply the same “fix” to it. This is NOT recommended: // Define our own `DispatchQueue.main.async` struct DispatchQueue { static let main: Self = .init() func async(_ work: @escaping () -> Void) {} } // This calls our DispatchQueue.main.async { // No error! Compiler has inserted `@_unsafeMainActor` mainActorFunc() } Perplexity through obscurity I love that this automatic @MainActor inference for DispatchQueue.main exists. I do not love that it’s another piece of hidden, implicit behavior that makes Swift concurrency harder to learn. I want to see all the @_unsafeMainActor and @_unsafeInheritExecutor and @_inheritActorContext annotations! I believe Apple is doing the community a disservice by hiding these in Xcode. The biggest benefit of Swift’s concurrency model over what we had before is that so many things are statically known at compile time. 
It’s a shame that the compiler knows on which executor a particular line of code will run, but none of the tools seem to be able to show me this. Instead, I’m forced to hunt for @MainActor annotations and hidden attributes in superclasses, protocols, etc. This feels especially problematic during the Swift 5-to-6 transition phase we’re currently in where it’s so easy to misuse concurrency and not get a compiler error (and sometimes not even a warning if you forget to enable strict concurrency checking). The most impactful change Apple can make to make Swift concurrency less confusing is to show the inferred executor context for each line of code in Xcode. Make it really obvious what code runs on the main actor, some other actor, or the global cooperative pool. Use colors or whatnot! (Other Swift IDEs should do this too, of course. I’m just picking on Xcode because Apple has the most leverage.)
How the relative size modifier interacts with stack views
And what it can teach us about SwiftUI’s stack layout algorithm I have one more thing to say on the relative sizing view modifier from my previous post, Working with percentages in SwiftUI layout. I’m assuming you’ve read that article. The following is good to know if you want to use the modifier in your own code, but I hope you’ll also learn some general tidbits about SwiftUI’s layout algorithm for HStacks and VStacks. Using relative sizing inside a stack view Let’s apply the relativeProposed modifier to one of the subviews of an HStack: HStack(spacing: 10) { Color.blue .relativeProposed(width: 0.5) Color.green Color.yellow } .border(.primary) .frame(height: 80) What do you expect to happen here? Will the blue view take up 50 % of the available width? The answer is no. In fact, the blue rectangle becomes narrower than the others: This is because the HStack only proposes a proportion of its available width to each of its children. Here, the stack proposes one third of the available space to its first child, the relative sizing modifier. The modifier then halves this value, resulting in one sixth of the total width (minus spacing) for the blue color. The other two rectangles then become wider than one third because the first child view didn’t use up its full proposed width. Update May 1, 2024: SwiftUI’s built-in containerRelativeFrame modifier (introduced after I wrote my modifier) doesn’t exhibit this behavior because it uses the size of the nearest container view as its reference, and stack views don’t count as containers in this context (which I find somewhat unintuitive, but that’s the way it is). Order matters Now let’s move the modifier to the green color in the middle: HStack(spacing: 10) { Color.blue Color.green .relativeProposed(width: 0.5) Color.yellow } Naively, I’d expect an equivalent result: the green rectangle should become 100 pt wide, and blue and yellow should be 250 pt each. But that’s not what happens — the yellow view ends up being wider than the blue one: I found this unintuitive at first, but it makes sense if you understand that the HStack processes its children in sequence: The HStack proposes one third of its available space to the blue view: (620 – 20) / 3 = 200. The blue view accepts the proposal and becomes 200 pt wide. Next up is the relativeProposed modifier. The HStack divides the remaining space by the number of remaining subviews and proposes that: 400 / 2 = 200. Our modifier halves this proposal and proposes 100 pt to the green view, which accepts it. The modifier in turn adopts the size of its child and returns 100 pt to the HStack. Since the second subview used less space than proposed, the HStack now has 300 pt left over to propose to its final child, the yellow color. Important: the order in which the stack lays out its subviews happens to be from left to right in this example, but that’s not always the case. In general, HStacks and VStacks first group their subviews by layout priority (more on that below), and then order the views inside each group by flexibility such that the least flexible views are laid out first. For more on this, see How an HStack Lays out Its Children by Chris Eidhof. The views in our example are all equally flexible (they all can become any width between 0 and infinity), so the stack processes them in their “natural” order. 
Leftover space isn’t redistributed By now you may be able to guess how the layout turns out when we move our view modifier to the last child view: HStack(spacing: 10) { Color.blue Color.green Color.yellow .relativeProposed(width: 0.5) } Blue and green each receive one third of the available width and become 200 pt wide. No surprises there. When the HStack reaches the relativeProposed modifier, it has 200 pt left to distribute. Again, the modifier and the yellow rectangle only use half of this amount. The end result is that the HStack ends up with 100 pt left over. The process stops here — the HStack does not start over in an attempt to find a “better” solution. The stack makes itself just big enough to contain its subviews (= 520 pt incl. spacing) and reports that size to its parent. Layout priority We can use the layoutPriority view modifier to influence how stacks and other containers lay out their children. Let’s give the subview with the relative sizing modifier a higher layout priority (the default priority is 0): HStack(spacing: 10) { Color.blue Color.green Color.yellow .relativeProposed(width: 0.5) .layoutPriority(1) } This results in a layout where the yellow rectangle actually takes up 50 % of the available space: Explanation: The HStack groups its children by layout priority and then processes each group in sequence, from highest to lowest priority. Each group is proposed the entire remaining space. The first layout group only contains a single view, our relative sizing modifier with the yellow color. The HStack proposes the entire available space (minus spacing) = 600 pt. Our modifier halves the proposal, resulting in 300 pt for the yellow view. There are 300 pt left over for the second layout group. These are distributed equally among the two children because each subview accepts the proposed size. Conclusion The code I used to generate the images in this article is available on GitHub. I only looked at HStacks here, but VStacks work in exactly the same way for the vertical dimension. SwiftUI’s layout algorithm always follows this basic pattern of proposed sizes and responses. Each of the built-in “primitive” views (e.g. fixed and flexible frames, stacks, Text, Image, Spacer, shapes, padding, background, overlay) has a well-defined (if not always well-documented) layout behavior that can be expressed as a function (ProposedViewSize) -> CGSize. You’ll need to learn the behavior for each view to work effectively with SwiftUI. A concrete lesson I’m taking away from this analysis: HStack and VStack don’t treat layout as an optimization problem that tries to find the optimal solution for a set of constraints (autolayout style). Rather, they sort their children in a particular way and then do a single proposal-and-response pass over them. If there’s space left over at the end, or if the available space isn’t enough, then so be it.
Working with percentages in SwiftUI layout
SwiftUI’s layout primitives generally don’t provide relative sizing options, e.g. “make this view 50 % of the width of its container”. Let’s build our own! Use case: chat bubbles Consider this chat conversation view as an example of what I want to build. The chat bubbles always remain 80 % as wide as their container as the view is resized: The chat bubbles should become 80 % as wide as their container. Download video Building a proportional sizing modifier 1. The Layout We can build our own relative sizing modifier on top of the Layout protocol. The layout multiplies its own proposed size (which it receives from its parent view) with the given factors for width and height. It then proposes this modified size to its only subview. Here’s the implementation (the full code, including the demo app, is on GitHub): /// A custom layout that proposes a percentage of its /// received proposed size to its subview. /// /// - Precondition: must contain exactly one subview. fileprivate struct RelativeSizeLayout: Layout { var relativeWidth: Double var relativeHeight: Double func sizeThatFits( proposal: ProposedViewSize, subviews: Subviews, cache: inout () ) -> CGSize { assert(subviews.count == 1, "expects a single subview") let resizedProposal = ProposedViewSize( width: proposal.width.map { $0 * relativeWidth }, height: proposal.height.map { $0 * relativeHeight } ) return subviews[0].sizeThatFits(resizedProposal) } func placeSubviews( in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout () ) { assert(subviews.count == 1, "expects a single subview") let resizedProposal = ProposedViewSize( width: proposal.width.map { $0 * relativeWidth }, height: proposal.height.map { $0 * relativeHeight } ) subviews[0].place( at: CGPoint(x: bounds.midX, y: bounds.midY), anchor: .center, proposal: resizedProposal ) } } Notes: I made the type private because I want to control how it can be used. This is important for maintaining the assumption that the layout only ever has a single subview (which makes the math much simpler). Proposed sizes in SwiftUI can be nil or infinity in either dimension. Our layout passes these special values through unchanged (infinity times a percentage is still infinity). I’ll discuss below what implications this has for users of the layout. 2. The View extension Next, we’ll add an extension on View that uses the layout we just wrote. This becomes our public API: extension View { /// Proposes a percentage of its received proposed size to `self`. public func relativeProposed(width: Double = 1, height: Double = 1) -> some View { RelativeSizeLayout(relativeWidth: width, relativeHeight: height) { // Wrap content view in a container to make sure the layout only // receives a single subview. Because views are lists! VStack { // alternatively: `_UnaryViewAdaptor(self)` self } } } } Notes: I decided to go with a verbose name, relativeProposed(width:height:), to make the semantics clear: we’re changing the proposed size for the subview, which won’t always result in a different actual size. More on this below. We’re wrapping the subview (self in the code above) in a VStack. This might seem redundant, but it’s necessary to make sure the layout only receives a single element in its subviews collection. See Chris Eidhof’s SwiftUI Views are Lists for an explanation. Usage The layout code for a single chat bubble in the demo video above looks like this: let alignment: Alignment = message.sender == .me ? 
.trailing : .leading chatBubble .relativeProposed(width: 0.8) .frame(maxWidth: .infinity, alignment: alignment) The outermost flexible frame with maxWidth: .infinity is responsible for positioning the chat bubble with leading or trailing alignment, depending on who’s speaking. You can even add another frame that limits the width to a maximum, say 400 points: let alignment: Alignment = message.sender == .me ? .trailing : .leading chatBubble .frame(maxWidth: 400) .relativeProposed(width: 0.8) .frame(maxWidth: .infinity, alignment: alignment) Here, our relative sizing modifier only has an effect as the bubbles become narrower than 400 points. In a wider window the width-limiting frame takes precedence. I like how composable this is! Download video 80 % won’t always result in 80 % If you watch the debugging guides I’m drawing in the video above, you’ll notice that the relative sizing modifier never reports a width greater than 400, even if the window is wide enough: The relative sizing modifier accepts the actual size of its subview as its own size. This is because our layout only adjusts the proposed size for its subview but then accepts the subview’s actual size as its own. Since SwiftUI views always choose their own size (which the parent can’t override), the subview is free to ignore our proposal. In this example, the layout’s subview is the frame(maxWidth: 400) view, which sets its own width to the proposed width or 400, whichever is smaller. Understanding the modifier’s behavior Proposed size ≠ actual size It’s important to internalize that the modifier works on the basis of proposed sizes. This means it depends on the cooperation of its subview to achieve its goal: views that ignore their proposed size will be unaffected by our modifier. I don’t find this particularly problematic because SwiftUI’s entire layout system works like this. Ultimately, SwiftUI views always determine their own size, so you can’t write a modifier that “does the right thing” (whatever that is) for an arbitrary subview hierarchy. nil and infinity I already mentioned another thing to be aware of: if the parent of the relative sizing modifier proposes nil or .infinity, the modifier will pass the proposal through unchanged. Again, I don’t think this is particularly bad, but it’s something to be aware of. Proposing nil is SwiftUI’s way of telling a view to become its ideal size (fixedSize does this). Would you ever want to tell a view to become, say, 50 % of its ideal width? I’m not sure. Maybe it’d make sense for resizable images and similar views. By the way, you could modify the layout to do something like this: If the proposal is nil or infinity, forward it to the subview unchanged. Take the reported size of the subview as the new basis and apply the scaling factors to that size (this still breaks down if the child returns infinity). Now propose the scaled size to the subview. The subview might respond with a different actual size. Return this latest reported size as your own size. This process of sending multiple proposals to child views is called probing. Lots of built-in container views do this too, e.g. VStack and HStack. Nesting in other container views The relative sizing modifier interacts in an interesting way with stack views and other containers that distribute the available space among their children. I thought this was such an interesting topic that I wrote a separate article about it: How the relative size modifier interacts with stack views.
The code The complete code is available in a Gist on GitHub. Digression: Proportional sizing in early SwiftUI betas The very first SwiftUI betas in 2019 did include proportional sizing modifiers, but they were taken out before the final release. Chris Eidhof preserved a copy of SwiftUI’s “header file” from that time that shows their API, including quite lengthy documentation. I don’t know why these modifiers didn’t survive the beta phase. The release notes from 2019 don’t give a reason: The relativeWidth(_:), relativeHeight(_:), and relativeSize(width:height:) modifiers are deprecated. Use other modifiers like frame(minWidth:idealWidth:maxWidth:minHeight:idealHeight:maxHeight:alignment:) instead. (51494692) I also don’t remember how these modifiers worked. They probably had somewhat similar semantics to my solution, but I can’t be sure. The doc comments linked above sound straightforward (“Sets the width of this view to the specified proportion of its parent’s width.”), but they don’t mention the intricacies of the layout algorithm (proposals and responses) at all. containerRelativeFrame Update May 1, 2024: Apple introduced the containerRelativeFrame modifier for its 2023 OSes (iOS 17/macOS 14). If your deployment target permits it, this can be a good, built-in alternative. Note that containerRelativeFrame behaves differently than my relativeProposed modifier as it computes the size relative to the nearest container view, whereas my modifier uses its proposed size as the reference. The SwiftUI documentation somewhat vaguely lists the views that count as a container for containerRelativeFrame. Notably, stack views don’t count! Check out Jordan Morgan’s article Modifier Monday: .containerRelativeFrame(_ axes:) (2022-06-26) to learn more about containerRelativeFrame.
Keyboard shortcuts for Export Unmodified Original in Photos for Mac
Problem The Photos app on macOS doesn’t provide a keyboard shortcut for the Export Unmodified Original command. macOS allows you to add your own app-specific keyboard shortcuts via System Settings > Keyboard > Keyboard Shortcuts > App Shortcuts. You need to enter the exact spelling of the menu item you want to invoke. Photos renames the command depending on what’s selected: “Export Unmodified Original For 1 Photo” turns into “… Originals For 2 Videos”, which turns into “… For 3 Items” (for mixed selections), and so on. Argh! The System Settings UI for assigning keyboard shortcuts is extremely tedious to use if you want to add more than one or two shortcuts. Dynamically renaming menu commands is cute, but it becomes a problem when you want to assign keyboard shortcuts. Solution: shell script Here’s a Bash script1 that assigns Ctrl + Opt + Cmd + E to Export Unmodified Originals for up to 20 selected items: #!/bin/bash # Assigns a keyboard shortcut to the Export Unmodified Originals # menu command in Photos.app on macOS. # @ = Command # ^ = Control # ~ = Option # $ = Shift shortcut='@~^e' # Set shortcut for 1 selected item echo "Setting shortcut for 1 item" defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Original For 1 Photo" "$shortcut" defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Original For 1 Video" "$shortcut" # Set shortcut for 2-20 selected items objects=(Photos Videos Items) for i in {2..20} do echo "Setting shortcut for $i items" for object in "${objects[@]}" do defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Originals For $i $object" "$shortcut" done done # Use this command to verify the result: # defaults read com.apple.Photos NSUserKeyEquivalents The script is also available on GitHub. Usage: Quit Photos.app. Run the script. Feel free to change the key combo or count higher than 20. Open Photos.app. Note: There’s a bug in Photos.app on macOS 13.2 (and at least some earlier versions). Custom keyboard shortcuts don’t work until you’ve opened the menu of the respective command at least once. So you must manually open the File > Export menu once before the shortcut will work. (For Apple folks: FB11967573.) I still write Bash scripts because Shellcheck doesn’t support Zsh. ↩︎
Swift Evolution proposals in Alfred
I rarely participate actively in the Swift Evolution process, but I frequently refer to evolution proposals for my work, often multiple times per week. The proposals aren’t always easy to read, but they’re the most comprehensive (and sometimes only) documentation we have for many Swift features. For years, my tool of choice for searching Swift Evolution proposals has been Karoy Lorentey’s swift-evolution workflow for Alfred. The workflow broke recently due to data format changes. Karoy was kind enough to add me as a maintainer so I could fix it. The new version 2.1.0 is now available on GitHub. Download the .alfredworkflow file and double-click to install. Besides the fix, the update has a few other improvements: The proposal title is now displayed more prominently. New actions to copy the proposal title (hold down Command) or copy it as a Markdown link (hold down Shift + Command). The script forwards the main metadata of the selected proposal (id, title, status, URL) to Alfred. If you want to extend the workflow with your own actions, you can refer to these variables.
Pattern matching on error codes
Foundation overloads the pattern matching operator ~= to enable matching against error codes in catch clauses. catch clauses in Swift support pattern matching, using the same patterns you’d use in a case clause inside a switch or in an if case … statement. For example, to handle a file-not-found error you might write: import Foundation do { let fileURL = URL(filePath: "/abc") // non-existent file let data = try Data(contentsOf: fileURL) } catch let error as CocoaError where error.code == .fileReadNoSuchFile { print("File doesn't exist") } catch { print("Other error: \(error)") } This binds a value of type CocoaError to the variable error and then uses a where clause to check the specific error code. However, if you don’t need access to the complete error instance, there’s a shorter way to write this, matching directly against the error code: let data = try Data(contentsOf: fileURL) - } catch let error as CocoaError where error.code == .fileReadNoSuchFile { + } catch CocoaError.fileReadNoSuchFile { print("File doesn't exist") Foundation overloads ~= I was wondering why this shorter syntax works. Is there some special compiler magic for pattern matching against error codes of NSError instances? Turns out: no, the answer is much simpler. Foundation includes an overload for the pattern matching operator ~= that matches error values against error codes.1 The implementation looks something like this: public func ~= (code: CocoaError.Code, error: any Error) -> Bool { guard let error = error as? CocoaError else { return false } return error.code == code } The actual code in Foundation is a little more complex because it goes through a hidden protocol named _ErrorCodeProtocol, but that’s not important. You can check out the code in the Foundation repository: Darwin version, swift-corelibs-foundation version. This matching on error codes is available for CocoaError, URLError, POSIXError, and MachError (and possibly more types in other Apple frameworks, I haven’t checked). I wrote about the ~= operator before, way back in 2015(!): Pattern matching in Swift and More pattern matching examples. ↩︎
You should watch Double Fine Adventure
I know I’m almost a decade late to this party, but I’m probably not the only one, so here goes. Double Fine Adventure was a wildly successful 2012 Kickstarter project to crowdfund the development of a point-and-click adventure game and, crucially, to document its development on video. The resulting game Broken Age was eventually released in two parts in 2014 and 2015. Broken Age is a beautiful game and I recommend you try it. It’s available for lots of platforms and is pretty cheap (10–15 euros/dollars or less). I played it on the Nintendo Switch, which worked very well. Broken Age. But the real gem to me was watching the 12.5-hour documentary on YouTube. A video production team followed the entire three-year development process from start to finish. It provides a refreshingly candid and transparent insight into “how the sausage is made”, including sensitive topics such as financial problems, layoffs, and long work hours. Throughout all the ups and downs there’s a wonderful sense of fun and camaraderie among the team at Double Fine, which made watching the documentary even more enjoyable to me than playing Broken Age. You can tell these people love working with each other. I highly recommend taking a look if you find this mildly interesting. The Double Fine Adventure documentary. The first major game spoilers don’t come until episode 15, so you can safely watch most of the documentary before playing the game (and this is how the original Kickstarter backers experienced it). However, I think it’s even more interesting to play the game first, or to experience both side-by-side. My suggestion: watch two or three episodes of the documentary. If you like it, start playing Broken Age alongside it.
Understanding SwiftUI view lifecycles
I wrote an app called SwiftUI View Lifecycle. The app allows you to observe how different SwiftUI constructs and containers affect a view’s lifecycle, including the lifetime of its state and when onAppear gets called. The code for the app is on GitHub. It can be built for iOS and macOS. The view tree and the render tree When we write SwiftUI code, we construct a view tree that consists of nested view values. Instances of the view tree are ephemeral: SwiftUI constantly destroys and recreates (parts of) the view tree as it processes state changes. The view tree serves as a blueprint from which SwiftUI creates a second tree, which represents the actual view “objects” that are “on screen” at any given time (the “objects” could be actual UIView or NSView objects, but also other representations; the exact meaning of “on screen” can vary depending on context). Chris Eidhof likes to call this second tree the render tree (the link points to a 3 minute video where Chris demonstrates this duality, highly recommended). The render tree persists across state changes and is used by SwiftUI to establish view identity. When a state change causes a change in a view’s value, SwiftUI will find the corresponding view object in the render tree and update it in place, rather than recreating a new view object from scratch. This is of course key to making SwiftUI efficient, but the render tree has another important function: it controls the lifetimes of views and their state. View lifecycles and state We can define a view’s lifetime as the timespan it exists in the render tree. The lifetime begins with the insertion into the render tree and ends with the removal. Importantly, the lifetime extends to view state defined with @State and @StateObject: when a view gets removed from the render tree, its state is lost; when the view gets inserted again later, the state will be recreated with its initial value. The SwiftUI View Lifecycle app tracks three lifecycle events for a view and displays them as timestamps: @State = when the view’s state was created (equivalent to the start of the view’s lifetime) onAppear = when onAppear was last called onDisappear = when onDisappear was last called The lifecycle monitor view displays the timestamps when certain lifecycle events last occurred. The app allows you to observe these events in different contexts. As you click your way through the examples, you’ll notice that the timing of these events changes depending on the context a view is embedded in. For example: An if/else statement creates and destroys its child views every time the condition changes; state is not preserved. A ScrollView eagerly inserts all of its children into the render tree, regardless of whether they’re inside the viewport or not. All children appear right away and never disappear. A List with dynamic content (using ForEach) lazily inserts only the child views that are currently visible. But once a child view’s lifetime has started, the list will keep its state alive even when it gets scrolled offscreen again. onAppear and onDisappear get called repeatedly as views are scrolled into and out of the viewport. A NavigationStack calls onAppear and onDisappear as views are pushed and popped. State for parent levels in the stack is preserved when a child view is pushed. A TabView starts the lifetime of all child views right away, even the non-visible tabs. onAppear and onDisappear get called repeatedly as the user switches tabs, but the tab view keeps the state alive for all tabs. 
Lessons Here are a few lessons to take away from this: Different container views may have different performance and memory usage behaviors, depending on how long they keep child views alive. onAppear isn’t necessarily called when the state is created. It can happen later (but never earlier). onAppear can be called multiple times in some container views. If you need a side effect to happen exactly once in a view’s lifetime, consider writing yourself an onFirstAppear helper, as shown by Ian Keen and Jordan Morgan in Running Code Only Once in SwiftUI (2022-11-01). I’m sure you’ll find more interesting tidbits when you play with the app. Feedback is welcome!
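A minimal sketch of such a helper (the name and shape follow the idea in the article linked above; treat the details as an assumption rather than that article's exact code):

```swift
import SwiftUI

extension View {
    /// Runs `action` the first time this view appears during its lifetime
    /// in the render tree; later onAppear calls are ignored.
    func onFirstAppear(_ action: @escaping () -> Void) -> some View {
        modifier(OnFirstAppear(action: action))
    }
}

private struct OnFirstAppear: ViewModifier {
    let action: () -> Void
    // Tied to the view's lifetime: resets only when the view leaves the render tree.
    @State private var hasAppeared = false

    func body(content: Content) -> some View {
        content.onAppear {
            guard !hasAppeared else { return }
            hasAppeared = true
            action()
        }
    }
}
```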
clipped() doesn’t affect hit testing
The clipped() modifier in SwiftUI clips a view to its bounds, hiding any out-of-bounds content. But note that clipping doesn’t affect hit testing; the clipped view can still receive taps/clicks outside the visible area. I tested this on iOS 16.1 and macOS 13.0. Example Here’s a 300×300 square, which we then constrain to a 100×100 frame. I also added a border around the outer frame to visualize the views: Rectangle() .fill(.orange.gradient) .frame(width: 300, height: 300) // Set view to 100×100 → renders out of bounds .frame(width: 100, height: 100) .border(.blue) SwiftUI views don’t clip their content by default, hence the full 300×300 square remains visible. Notice the blue border that indicates the 100×100 outer frame: Now let’s add .clipped() to clip the large square to the 100×100 frame. I also made the square tappable and added a button: VStack { Button("You can't tap me!") { buttonTapCount += 1 } .buttonStyle(.borderedProminent) Rectangle() .fill(.orange.gradient) .frame(width: 300, height: 300) .frame(width: 100, height: 100) .clipped() .onTapGesture { rectTapCount += 1 } } When you run this code, you’ll discover that the button isn’t tappable at all. This is because the (unclipped) square, despite not being fully visible, obscures the button and “steals” all taps. The dashed outline indicates the hit area of the orange square. The button isn’t tappable because it’s covered by the clipped view with respect to hit testing. The fix: .contentShape() The contentShape(_:) modifier defines the hit testing area for a view. By adding .contentShape(Rectangle()) to the 100×100 frame, we limit hit testing to that area, making the button tappable again: Rectangle() .fill(.orange.gradient) .frame(width: 300, height: 300) .frame(width: 100, height: 100) .contentShape(Rectangle()) .clipped() Note that the order of .contentShape(Rectangle()) and .clipped() could be swapped. The important thing is that contentShape is an (indirect) parent of the 100×100 frame modifier that defines the size of the hit testing area. Video demo I made a short video that demonstrates the effect: Initially, taps on the button, or even on the surrounding whitespace, register as taps on the square. The top switch toggles display of the square before clipping. This illustrates its hit testing area. The second switch adds .contentShape(Rectangle()) to limit hit testing to the visible area. Now tapping the button increments the button’s tap count. The full code for this demo is available on GitHub. Download video Summary The clipped() modifier doesn’t affect the clipped view’s hit testing region. The same is true for clipShape(_:). It’s often a good idea to combine these modifiers with .contentShape(Rectangle()) to bring the hit testing logic in sync with the UI.
When .animation animates more (or less) than it’s supposed to
On the positioning of the .animation modifier in the view tree, or: “Rendering” vs. “non-rendering” view modifiers The documentation for SwiftUI’s animation modifier says: Applies the given animation to this view when the specified value changes. This sounds unambiguous to me: it sets the animation for “this view”, i.e. the part of the view tree that .animation is being applied to. This should give us complete control over which modifiers we want to animate, right? Unfortunately, it’s not that simple: it’s easy to run into situations where a view change inside an animated subtree doesn’t get animated, or vice versa. Unsurprising examples Let me give you some examples, starting with those that do work as documented. I tested all examples on iOS 16.1 and macOS 13.0. 1. Sibling views can have different animations Independent subtrees of the view tree can be animated independently. In this example we have three sibling views, two of which are animated with different durations, and one that isn’t animated at all: struct Example1: View { var flag: Bool var body: some View { HStack(spacing: 40) { Rectangle() .frame(width: 80, height: 80) .foregroundColor(.green) .scaleEffect(flag ? 1 : 1.5) .animation(.easeOut(duration: 0.5), value: flag) Rectangle() .frame(width: 80, height: 80) .foregroundColor(flag ? .yellow : .red) .rotationEffect(flag ? .zero : .degrees(45)) .animation(.easeOut(duration: 2.0), value: flag) Rectangle() .frame(width: 80, height: 80) .foregroundColor(flag ? .pink : .mint) } } } The two animation modifiers each apply to their own subtree. They don’t interfere with each other and have no effect on the rest of the view hierarchy: Download video 2. Nested animation modifiers When two animation modifiers are nested in a single view tree such that one is an indirect parent of the other, the inner modifier can override the outer animation for its subviews. The outer animation applies to view modifiers that are placed between the two animation modifiers. In this example we have one rectangle view with animated scale and rotation effects. The outer animation applies to the entire subtree, including both effects. The inner animation modifier overrides the outer animation only for what’s nested below it in the view tree, i.e. the scale effect: struct Example2: View { var flag: Bool var body: some View { Rectangle() .frame(width: 80, height: 80) .foregroundColor(.green) .scaleEffect(flag ? 1 : 1.5) .animation(.default, value: flag) // inner .rotationEffect(flag ? .zero : .degrees(45)) .animation(.default.speed(0.3), value: flag) // outer } } As a result, the scale and rotation changes animate at different speeds: Download video Note that we can also pass .animation(nil, value: flag) to selectively disable animations for a subtree, overriding a non-nil animation further up the view tree. 3. animation only animates its children (with exceptions) As a general rule, the animation modifier only applies to its subviews. In other words, views and modifiers that are direct or indirect parents of an animation modifier should not be animated. As we’ll see below, it doesn’t always work like that, but here’s an example where it does. This is a slight variation of the previous code snippet where I removed the outer animation modifier (and changed the color for good measure): struct Example3: View { var flag: Bool var body: some View { Rectangle() .frame(width: 80, height: 80) .foregroundColor(.orange) .scaleEffect(flag ? 
1 : 1.5) .animation(.default, value: flag) // Don't animate the rotation .rotationEffect(flag ? .zero : .degrees(45)) } } Recall that the order in which view modifiers are written in code is inverted with respect to the actual view tree hierarchy. Each view modifier is a new view that wraps the view it’s being applied to. So in our example, the scale effect is the child of the animation modifier, whereas the rotation effect is its parent. Accordingly, only the scale change gets animated: Download video Surprising examples Now it’s time for the “fun” part. It turns out not all view modifiers behave as intuitively as scaleEffect and rotationEffect when combined with the animation modifier. 4. Some modifiers don’t respect the rules In this example we’re changing the color, size, and alignment of the rectangle. Only the size change should be animated, which is why we’ve placed the alignment and color mutations outside the animation modifier: struct Example4: View { var flag: Bool var body: some View { let size: CGFloat = flag ? 80 : 120 Rectangle() .frame(width: size, height: size) .animation(.default, value: flag) .frame(maxWidth: .infinity, alignment: flag ? .leading : .trailing) .foregroundColor(flag ? .pink : .indigo) } } Unfortunately, this doesn’t work as intended, as all three changes are animated: Download video It behaves as if the animation modifier were the outermost element of this view subtree. 5. padding and border This one’s sort of the inverse of the previous example because a change we want to animate doesn’t get animated. The padding is a child of the animation modifier, so I’d expect changes to it to be animated, i.e. the border should grow and shrink smoothly: struct Example5: View { var flag: Bool var body: some View { Rectangle() .frame(width: 80, height: 80) .padding(flag ? 20 : 40) .animation(.default, value: flag) .border(.primary) .foregroundColor(.cyan) } } But that’s not what happens: Download video 6. Font modifiers Font modifiers also behave seemingly erratic with respect to the animation modifier. In this example, we want to animate the font width, but not the size or weight (smooth text animation is a new feature in iOS 16): struct Example6: View { var flag: Bool var body: some View { Text("Hello!") .fontWidth(flag ? .condensed : .expanded) .animation(.default, value: flag) .font(.system( size: flag ? 40 : 60, weight: flag ? .regular : .heavy) ) } } You guessed it, this doesn’t work as intended. Instead, all text properties animate smoothly: Download video Why does it work like this? In summary, the placement of the animation modifier in the view tree allows some control over which changes get animated, but it isn’t perfect. Some modifiers, such as scaleEffect and rotationEffect, behave as expected, whereas others (frame, padding, foregroundColor, font) are less controllable. I don’t fully understand the rules, but the important factor seems to be if a view modifier actually “renders” something or not. For instance, foregroundColor just writes a color into the environment; the modifier itself doesn’t draw anything. I suppose this is why its position with respect to animation is irrelevant: RoundedRectangle(cornerRadius: flag ? 0 : 40) .animation(.default, value: flag) // Color change still animates, even though we’re outside .animation .foregroundColor(flag ? .pink : .indigo) The rendering presumably takes place on the level of the RoundedRectangle, which reads the color from the environment. 
At this point the animation modifier is active, so SwiftUI will animate all changes that affect how the rectangle is rendered, regardless of where in the view tree they’re coming from. The same explanation makes intuitive sense for the font modifiers in example 6. The actual rendering, and therefore the animation, occurs on the level of the Text view. The various font modifiers affect how the text is drawn, but they don’t render anything themselves. Similarly, padding and frame (including the frame’s alignment) are “non-rendering” modifiers too. They don’t use the environment, but they influence the layout algorithm, which ultimately affects the size and position of one or more “rendering” views, such as the rectangle in example 4. That rectangle sees a combined change in its geometry, but it can’t tell where the change came from, so it’ll animate the full geometry change. In example 5, the “rendering” view that’s affected by the padding change is the border (which is implemented as a stroked rectangle in an overlay). Since the border is a parent of the animation modifier, its geometry change is not animated. In contrast to frame and padding, scaleEffect and rotationEffect are “rendering” modifiers. They apparently perform the animations themselves. Conclusion SwiftUI views and view modifiers can be divided into “rendering“ and “non-rendering” groups (I wish I had better terms for these). In iOS 16/macOS 13, the placement of the animation modifier with respect to non-rendering modifiers is irrelevant for deciding if a change gets animated or not. Non-rendering modifiers include (non-exhaustive list): Layout modifiers (frame, padding, position, offset) Font modifiers (font, bold, italic, fontWeight, fontWidth) Other modifiers that write data into the environment, e.g. foregroundColor, foregroundStyle, symbolRenderingMode, symbolVariant Rendering modifiers include (non-exhaustive list): clipShape, cornerRadius Geometry effects, e.g. scaleEffect, rotationEffect, projectionEffect Graphical effects, e.g. blur, brightness, hueRotation, opacity, saturation, shadow
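A quick way to classify a modifier yourself, per the behavior described above, is to place it outside an animation modifier and check whether its change still animates. Here's a minimal probe view for that (my own sketch, not one of the original examples; the opacity line is just a placeholder to swap out):

```swift
import SwiftUI

// Probe for classifying a modifier as "rendering" or "non-rendering":
// the modifier under test sits *outside* .animation, so per the rules above
// its change should only animate if it is a non-rendering modifier.
struct ModifierProbe: View {
    @State private var flag = false

    var body: some View {
        Rectangle()
            .frame(width: 80, height: 80)
            .scaleEffect(flag ? 1 : 1.5)   // inside .animation: always animates
            .animation(.default, value: flag)
            .opacity(flag ? 1 : 0.3)       // modifier under test; try foregroundColor, padding, font, …
            .onTapGesture { flag.toggle() }
    }
}
```

If the change animates even though the modifier sits outside .animation, it falls in the non-rendering group; per the lists above, opacity (a graphical effect) should not.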
Xcode 14.0 generates wrong concurrency code for macOS targets
Mac apps built with Xcode 14.0 and 14.0.1 may contain concurrency bugs because the Swift 5.7 compiler can generate invalid code when targeting the macOS 12.3 SDK. If you distribute Mac apps, you should build them with Xcode 13.4.1 until Xcode 14.1 is released. Here’s what happened: Swift 5.7 implements SE-0338: Clarify the Execution of Non-Actor-Isolated Async Functions, which introduces new rules for how async functions hop between executors. Because of SE-0338, when compiling concurrency code, the Swift 5.7 compiler places executor hops in different places than Swift 5.6. Some standard library functions need to opt out of the new rules. They are annotated with a new, unofficial attribute @_unsafeInheritExecutor, which was introduced for this purpose. When the Swift 5.7 compiler sees this attribute, it generates different executor hops. The attribute is only present in the Swift 5.7 standard library, i.e. in the iOS 16 and macOS 13 SDKs. This is fine for iOS because the compiler version and the SDK’s standard library version match in Xcode 14.0. But for macOS targets, Xcode 14.0 uses the Swift 5.7 compiler with the standard library from Swift 5.6, which doesn’t contain the @_unsafeInheritExecutor attribute. This is what causes the bugs. Note that the issue is caused purely by the version mismatch at compile time. The standard library version used by the compiled app at run time (which depends on the OS version the app runs on) isn’t relevant. As soon as Xcode 14.1 gets released with the macOS 13 SDK, the version mismatch will go away, and Mac targets built with Xcode 14.1 won’t exhibit these bugs. Third-party developers had little chance of discovering the bug during the Xcode 14.0 beta phase because the betas ship with the new beta macOS SDK. The version mismatch occurs when the final Xcode release in September reverts to the old macOS SDK to accommodate the different release schedules of iOS and macOS. Sources Breaking concurrency invariants is a serious issue, though I’m not sure how much of a problem this is in actual production apps. Here are all related bug reports that I know of: Concurrency is broken in Xcode 14 for macOS (2022-09-14) withUnsafeContinuation can break actor isolation (2022-10-07) And explanations of the cause from John McCall of the Swift team at Apple: John McCall (2022-10-07): This guarantee is unfortunately broken with Xcode 14 when compiling for macOS because it’s shipping with an old macOS SDK that doesn’t declare that withUnsafeContinuation inherits its caller’s execution context. And yes, there is a related actor-isolation issue because of this bug. That will be fixed by the release of the new macOS SDK. John McCall (2022-10-07): Now, there is a bug in Xcode 14 when compiling for the macOS SDK because it ships with an old SDK. That bug doesn’t actually break any of the ordering properties above. It does, however, break Swift’s data isolation guarantees because it causes withUnsafeContinuation, when called from an actor-isolated context, to send a non-Sendable function to a non-isolated executor and then call it, which is completely against the rules. And in fact, if you turn strict sendability checking on when compiling against that SDK, you will get a diagnostic about calling withUnsafeContinuation because it thinks that you’re violating the rules (because withUnsafeContinuation doesn’t properly inherit the execution context of its caller). Poor communication from Apple What bugs me most about the situation is Apple’s poor communication.
When the official, current release of your programming language ships with a broken compiler for one of your most important platforms, the least I’d expect is a big red warning at the top of the release notes. I can’t find any mention of this issue in the Xcode 14.0 release notes or Xcode 14.0.1 release notes, however. Even better: the warning should be displayed prominently in Xcode, or Xcode 14.0 should outright refuse to build Mac apps. I’m sure the latter option isn’t practical for all sorts of reasons, although it sounds logical to me: if the only safe compiler/SDK combinations are either 5.6 with the macOS 12 SDK or 5.7 with the macOS 13 SDK, there shouldn’t be an official Xcode version that combines the 5.7 compiler with the macOS 12 SDK.
Where View.task gets its main-actor isolation from
SwiftUI’s .task modifier inherits its actor context from the surrounding function. If you call .task inside a view’s body property, the async operation will run on the main actor because View.body is (semi-secretly) annotated with @MainActor. However, if you call .task from a helper property or function that isn’t @MainActor-annotated, the async operation will run in the cooperative thread pool. Example Here’s an example. Notice the two .task modifiers in body and helperView. The code is identical in both, yet only one of them compiles — in helperView, the call to a main-actor-isolated function fails because we’re not on the main actor in that context: We can call a main-actor-isolated function from inside body, but not from a helper property. import SwiftUI @MainActor func onMainActor() { print("on MainActor") } struct ContentView: View { var body: some View { VStack { helperView Text("in body") .task { // We can call a @MainActor func without await onMainActor() } } } var helperView: some View { Text("in helperView") .task { // ❗️ Error: Expression is 'async' but is not marked with 'await' onMainActor() } } } Why does it work like this? This behavior is caused by two (semi-)hidden annotations in the SwiftUI framework: The View protocol annotates its body property with @MainActor. This transfers to all conforming types. View.task annotates its action parameter with @_inheritActorContext, causing it to adopt the actor context from its use site. Sadly, none of these annotations are visible in the SwiftUI documentation, making it very difficult to understand what’s going on. The @MainActor annotation on View.body is present in Xcode’s generated Swift interface for SwiftUI (Jump to Definition of View), but that feature doesn’t work reliably for me, and as we’ll see, it doesn’t show the whole truth, either. View.body is annotated with @MainActor in Xcode’s generated interface for SwiftUI. SwiftUI’s module interface To really see the declarations the compiler sees, we need to look at SwiftUI’s module interface file. A module interface is like a header file for Swift modules. It lists the module’s public declarations and even the implementations of inlinable functions. Module interfaces use normal Swift syntax and have the .swiftinterface file extension. SwiftUI’s module interface is located at: [Path to Xcode.app]/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/System/Library/Frameworks/SwiftUI.framework/Modules/SwiftUI.swiftmodule/arm64e-apple-ios.swiftinterface (There can be multiple .swiftinterface files in that directory, one per CPU architecture. Pick any one of them. Pro tip for viewing the file in Xcode: Editor > Syntax Coloring > Swift enables syntax highlighting.) 
Inside, you’ll find that View.body has the @MainActor(unsafe) attribute: @available(iOS 13.0, macOS 10.15, tvOS 13.0, watchOS 6.0, *) @_typeEraser(AnyView) public protocol View { // … @SwiftUI.ViewBuilder @_Concurrency.MainActor(unsafe) var body: Self.Body { get } } And you’ll find this declaration for .task, including the @_inheritActorContext attribute: @available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *) extension SwiftUI.View { #if compiler(>=5.3) && $AsyncAwait && $Sendable && $InheritActorContext @inlinable public func task( priority: _Concurrency.TaskPriority = .userInitiated, @_inheritActorContext _ action: @escaping @Sendable () async -> Swift.Void ) -> some SwiftUI.View { modifier(_TaskModifier(priority: priority, action: action)) } #endif // … } SwiftUI’s module interface file shows the @_inheritActorContext annotation on View.task. Putting it all together Armed with this knowledge, everything makes more sense: When used inside body, task inherits the @MainActor context from body. When used outside of body, there is no implicit @MainActor annotation, so task will run its operation on the cooperative thread pool by default. Unless the view contains an @ObservedObject or @StateObject property, which makes the entire view @MainActor via this obscure rule for property wrappers whose wrappedValue property is bound to a global actor: A struct or class containing a wrapped instance property with a global actor-qualified wrappedValue infers actor isolation from that property wrapper Update May 1, 2024: SE-0401: Remove Actor Isolation Inference caused by Property Wrappers removes the above rule when compiling in Swift 6 language mode. This is a good change because it makes reasoning about actor isolation simpler. In the Swift 5 language mode, you can opt into the better behavior with the -enable-upcoming-feature DisableOutwardActorInference compiler flag. I recommend you do. The lesson: if you use helper properties or functions in your view, consider annotating them with @MainActor to get the same semantics as body. By the way, note that the actor context only applies to code that is placed directly inside the async closure, as well as to synchronous functions the closure calls. Async functions choose their own execution context, so any call to an async function can switch to a different executor. For example, if you call URLSession.data(from:) inside a main-actor-annotated function, the runtime will hop to the global cooperative executor to execute that method. See SE-0338: Clarify the Execution of Non-Actor-Isolated Async Functions for the precise rules. On Apple’s policy to hide annotations in documentation I understand Apple’s impetus not to show unofficial API or language features in the documentation lest developers get the preposterous idea to use these features in their own code! But it makes understanding so much harder. Before I saw the annotations in the .swiftinterface file, the behavior of the code at the beginning of this article never made sense to me. Hiding the details makes things seem like magic when they actually aren’t. And that’s not good, either.
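To close the loop on the example from the top of the article: the lesson above amounts to a one-line change. This is my own sketch, reusing the onMainActor function from the first snippet, with no other assumptions:

```swift
import SwiftUI

struct FixedContentView: View {
    var body: some View {
        VStack {
            helperView
            Text("in body")
                .task {
                    onMainActor() // OK: body is implicitly @MainActor
                }
        }
    }

    // Annotating the helper property gives it the same isolation as body,
    // so the .task closure inherits the main-actor context and the call compiles.
    @MainActor var helperView: some View {
        Text("in helperView")
            .task {
                onMainActor() // now OK, no 'await' needed
            }
    }
}
```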
Experimenting with Live Activities
iOS 16 beta 4 is the first SDK release that supports Live Activities. A Live Activity is a widget-like view an app can place on your lock screen and update in real time. Examples where this can be useful include live sports scores or train departure times. These are my notes on playing with the API and implementing my first Live Activity. A bike computer on your lock screen My Live Activity is a display for a bike computer that I’ve been developing with a group of friends. Here’s a video of it in action: Download video And here with simulated data: Download video I haven’t talked much about our bike computer project publicly yet; that will hopefully change someday. In short, a group of friends and I designed a little box that connects to your bike’s hub dynamo, measures speed and distance, and sends the data via Bluetooth to an iOS app. The app records all your rides and can also act as a live speedometer when mounted on your bike’s handlebar. It’s this last feature that I wanted to replicate in the Live Activity. Follow Apple’s guide Adding a Live Activity to the app wasn’t hard. I found Apple’s guide Displaying live data on the Lock Screen with Live Activities easy to follow and quite comprehensive. No explicit user approval iOS doesn’t ask the user for approval when an app wants to show a Live Activity. I found this odd since it seems to invite developers to abuse the feature, but maybe it’s OK because of the foreground requirement (see below). Plus, users can disallow Live Activities on a per-app basis in Settings. Users can dismiss an active Live Activity from the lock screen by swiping (like a notification). Most apps will probably need to ask the user for notification permissions to update their Live Activities. The app must be in the foreground to start an activity To start a Live Activity, an app must be open in the foreground. This isn’t ideal for the bike computer because the speedometer can’t appear magically on the lock screen when the user starts riding (even though iOS wakes up the app in the background at this point to deliver the Bluetooth events from the bike). The user has to open the app manually at least once. On the other hand, this limitation may not be an issue for most use cases and will probably cut down on spamming/abuse significantly. The app must keep running in the background to update the activity (or use push notifications) As long as the app keeps running (in the foreground or background), it can update the Live Activity as often as it wants (I think). This is ideal for the bike computer as the app keeps running in the background processing Bluetooth events while the bike is in motion. I assume the same applies to other apps that can remain alive in the background, such as audio players or navigation apps doing continuous location monitoring. Updating the Live Activity once per second was no problem in my testing, and I didn’t experience any rate limiting. Most apps get suspended in the background, however. They must use push notifications to update their Live Activity (or background tasks or some other mechanism to have the system wake the app up). Apple introduced a new kind of push notification that is delivered directly to the Live Activity, bypassing the app altogether. I haven’t played with push notification updates, so I don’t know the benefits of using this method over sending a silent push notification to wake the app and updating the Live Activity from there. Probably less aggressive rate limiting?
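For reference, starting and updating an activity from the app side looks roughly like this. This is a sketch against the ActivityKit API in the iOS 16 SDKs; the RideAttributes type and its properties are my own stand-ins, not the actual bike computer code:

```swift
import ActivityKit

// Hypothetical attributes for the bike computer activity. The static
// attributes are fixed for the lifetime of the activity; ContentState
// holds the values that change with every update.
struct RideAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var speedKPH: Double
        var distanceKM: Double
    }
    var rideName: String
}

// Starting an activity only works while the app is in the foreground.
func startRideActivity() throws -> Activity<RideAttributes> {
    try Activity.request(
        attributes: RideAttributes(rideName: "Morning ride"),
        contentState: .init(speedKPH: 0, distanceKM: 0)
    )
}

// As long as the app keeps running (e.g. in the background, processing
// Bluetooth events), it can keep pushing updates to the lock screen.
func updateRideActivity(
    _ activity: Activity<RideAttributes>,
    speedKPH: Double,
    distanceKM: Double
) async {
    await activity.update(using: .init(speedKPH: speedKPH, distanceKM: distanceKM))
}
```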
Lock screen color matching I haven’t found a good way to match my Live Activity’s colors to the current system colors on the lock screen. By default, text in a Live Activity is black in light mode, whereas the built-in lock screen themes seem to favor white or other light text colors. If there is an API or environment value that allows apps to match the color style of the current lock screen, I haven’t found it. I experimented with various foreground styles, such as materials, without success. I ended up hardcoding the foreground color, but I’m not satisfied with the result. Depending on the user’s lock screen theme, the Live Activity can look out of place. The default text color of a Live Activity in light mode is black. This doesn’t match most lock screen themes. Animations can’t be disabled Apple’s guide clearly states that developers have little control over animations in a Live Activity: Animate content updates When you define the user interface of your Live Activity, the system ignores any animation modifiers — for example, withAnimation(_:_:) and animation(_:value:) — and uses the system’s animation timing instead. However, the system performs some animation when the dynamic content of the Live Activity changes. Text views animate content changes with blurred content transitions, and the system animates content transitions for images and SF Symbols. If you add or remove views from the user interface based on content or state changes, views fade in and out. Use the following view transitions to configure these built-in transitions: opacity, move(edge:), slide, push(from:), or combinations of them. Additionally, request animations for timer text with numericText(countsDown:). It makes total sense to me that Apple doesn’t want developers to go crazy with animations on the lock screen, and perhaps having full control over animations also makes it easier for Apple to integrate Live Activities into the always-on display that’s probably coming on the next iPhone. What surprised me is that I couldn’t find a way to disable the text change animations altogether. I find the blurred text transitions for the large speed value quite distracting and I think this label would look better without any animations. But no combination of .animation(nil), .contentTransition(.identity), and .transition(.identity) would do this. Sharing code between app and widget A Live Activity is very much like a widget: the UI must live in your app’s widget extension. You start the Live Activity with code that runs in your app, though. Both targets (the app and the widget extension) need access to a common data type that represents the data the widget displays. You should have a third target (a framework or SwiftPM package) that contains such shared types and APIs and that the downstream targets import. Availability annotations Update September 22, 2022: This limitation no longer applies. The iOS 16.1 SDK added the ability to have availability conditions in WidgetBundle. Source: Tweet from Luca Bernardi (2022-09-20). WidgetBundle apparently doesn’t support widgets with different minimum deployment targets. 
If your widget extension has a deployment target of iOS 14 or 15 for an existing widget and you now want to add a Live Activity, I’d expect your widget bundle to look like this: @main struct MyWidgets: WidgetBundle { var body: some Widget { MyNormalWidget() // Error: Closure containing control flow statement cannot // be used with result builder 'WidgetBundleBuilder' if #available(iOSApplicationExtension 16.0, *) { MyLiveActivityWidget() } } } But this doesn’t compile because the result builder type used by WidgetBundle doesn’t support availability conditions. I hope Apple fixes this. This wasn’t a problem for me because our app didn’t have any widgets until now, so I just set the deployment target of the widget extension to iOS 16.0. If you have existing widgets and can’t require iOS 16 yet, a workaround is to add a second widget extension target just for the Live Activity. I haven’t tried this, but WidgetKit explicitly supports having multiple widget extensions, so it should work: Typically, you include all your widgets in a single widget extension, although your app can contain multiple extensions.
How @MainActor works
@MainActor is a Swift annotation to coerce a function to always run on the main thread and to enable the compiler to verify this. How does this work? In this article, I’m going to reimplement @MainActor in a slightly simplified form for illustration purposes, mainly to show how little “magic” there is to it. The code of the real implementation in the Swift standard library is available in the Swift repository. @MainActor relies on two Swift features, one of them unofficial: global actors and custom executors. Global actors MainActor is a global actor. That is, it provides a single actor instance that is shared between all places in the code that are annotated with @MainActor. All global actors must implement the shared property that’s defined in the GlobalActor protocol (every global actor implicitly conforms to this protocol): @globalActor final actor MyMainActor { // Requirements from the implicit GlobalActor conformance typealias ActorType = MyMainActor static var shared: ActorType = MyMainActor() // Don’t allow others to create instances private init() {} } At this point, we have a global actor that has the same semantics as any other actor. That is, functions annotated with @MyMainActor will run on a thread in the cooperative thread pool managed by the Swift runtime. To move the work to the main thread, we need another concept, custom executors. Executors A bit of terminology: The compiler splits async code into jobs. A job roughly corresponds to the code from one await (= potential suspension point) to the next. The runtime submits each job to an executor. The executor is the object that decides in which order and in which context (i.e. which thread or dispatch queue) to run the jobs. Swift ships with two built-in executors: the default concurrent executor, used for “normal”, non-actor-isolated async functions, and a default serial executor. Every actor instance has its own instance of this default serial executor and runs its code on it. Since the serial executor, like a serial dispatch queue, only runs a single job at a time, this prevents concurrent accesses to the actor’s state. Custom executors As of Swift 5.6, executors are an implementation detail of Swift’s concurrency system, but it’s almost certain that they will become an official feature fairly soon. Why? Because it can sometimes be useful to have more control over the execution context of async code. Some examples are listed in a draft proposal for allowing developers to implement custom executors that was first pitched in February 2021 but then didn’t make the cut for Swift 5.5. @MainActor already uses the unofficial ability for an actor to provide a custom executor, and we’re going to do the same for our reimplementation. A serial executor that runs its job on the main dispatch queue is implemented as follows. The interesting bit is the enqueue method, where we tell the job to run on the main dispatch queue: final class MainExecutor: SerialExecutor { func asUnownedSerialExecutor() -> UnownedSerialExecutor { UnownedSerialExecutor(ordinary: self) } func enqueue(_ job: UnownedJob) { DispatchQueue.main.async { job._runSynchronously(on: self.asUnownedSerialExecutor()) } } } We’re responsible for keeping an instance of the executor alive, so let’s store it in a global: private let mainExecutor = MainExecutor() Finally, we need to tell our global actor to use the new executor: import Dispatch @globalActor final actor MyMainActor { // ... 
// Requirement from the implicit GlobalActor conformance static var sharedUnownedExecutor: UnownedSerialExecutor { mainExecutor.asUnownedSerialExecutor() } // Requirement from the implicit Actor conformance nonisolated var unownedExecutor: UnownedSerialExecutor { mainExecutor.asUnownedSerialExecutor() } } That’s all there is to reimplement the basics of @MainActor. Conclusion The full code is on GitHub, including a usage example to demonstrate that the @MyMainActor annotations work. John McCall’s draft proposal for custom executors is worth reading, particularly the philosophy section. It’s an easy-to-read summary of some of the design principles behind Swift’s concurrency system: Swift’s concurrency design sees system threads as expensive and rather precious resources. … It is therefore best if the system allocates a small number of threads — just enough to saturate the available cores — and for those threads [to] only block for extended periods when there is no pending work in the program. Individual functions cannot effectively make this decision about blocking, because they lack a holistic understanding of the state of the program. Instead, the decision must be made by a centralized system which manages most of the execution resources in the program. This basic philosophy of how best to use system threads drives some of the most basic aspects of Swift’s concurrency design. In particular, the main reason to add async functions is to make it far easier to write functions that, unlike standard functions, will reliably abandon a thread when they need to wait for something to complete. And: The default concurrent executor is used to run jobs that don’t need to run somewhere more specific. It is based on a fixed-width thread pool that scales to the number of available cores. Programmers therefore do not need to worry that creating too many jobs at once will cause a thread explosion that will starve the program of resources.
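A minimal usage sketch (my own, mirroring the idea of the example in the repo): a function annotated with @MyMainActor should end up on the main thread, even when called from a detached task.

```swift
import Foundation

// Functions annotated with our global actor should always run on the
// main dispatch queue, courtesy of MainExecutor.
@MyMainActor func updateUI() {
    print("on main thread?", Thread.isMainThread) // expected: true
}

func demo() {
    Task.detached {
        // We start on the cooperative thread pool; calling into the
        // @MyMainActor-isolated function requires an await and hops
        // to the main queue via the custom executor.
        await updateUI()
    }
}
```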
AttributedString’s Codable format and what it has to do with Unicode
Here’s a simple AttributedString with some formatting: import Foundation let str = try! AttributedString( markdown: "Café **Sol**", options: .init(interpretedSyntax: .inlineOnly) ) AttributedString is Codable. If your task was to design the encoding format for an attributed string, what would you come up with? Something like this seems reasonable (in JSON with comments): { "text": "Café Sol", "runs": [ { // start..<end in Character offsets "range": [5, 8], "attrs": { "strong": true } } ] } This stores the text alongside an array of runs of formatting attributes. Each run consists of a character range and an attribute dictionary. Unicode is complicated But this format is bad and can break in various ways. The problem is that the character offsets that define the runs aren’t guaranteed to be stable. The definition of what constitutes a Character, i.e. a user-perceived character, or a Unicode grapheme cluster, can and does change in new Unicode versions. If we decoded an attributed string that had been serialized on a different OS version (before Swift 5.6, Swift used the OS’s Unicode library for determining character boundaries), or by code compiled with a different Swift version (since Swift 5.6, Swift uses its own grapheme breaking algorithm that will be updated alongside the Unicode standard)1, the character ranges might no longer represent the original intent, or even become invalid. Update April 11, 2024: See this Swift forum post I wrote for an example where the Unicode rules for grapheme cluster segmentation changed for flag emoji. This change caused a corresponding change in how Swift counts the Characters in a string containing consecutive flags, such as "🇦🇷🇯🇵". Normalization forms So let’s use UTF-8 byte offsets for the ranges, I hear you say. This avoids the first issue but still isn’t safe, because some characters, such as the é in the example string, have more than one representation in Unicode: it can be either the standalone character é (Latin small letter e with acute) or the combination of e + ◌́ (Combining acute accent). The Unicode standard calls these variants normalization forms.2 The first form needs 2 bytes in UTF-8, whereas the second uses 3 bytes, so subsequent ranges would be off by one if the string and the ranges used different normalization forms. Now in theory, the string itself and the ranges should use the same normalization form upon serialization, avoiding the problem. But this is almost impossible to guarantee if the serialized data passes through other systems that may (inadvertently or not) change the Unicode normalization of the strings that pass through them. A safer option would be to store the text not as a string but as a blob of UTF-8 bytes, because serialization/networking/storage layers generally don’t mess with binary data. But even then you’d have to be careful in the encoding and decoding code to apply the formatting attributes before any normalization takes place. Depending on how your programming language handles Unicode, this may not be so easy. Foundation’s solution The people on the Foundation team know all this, of course, and chose a better encoding format for Attributed String. 
Let’s take a look.3 let encoder = JSONEncoder() encoder.outputFormatting = [.prettyPrinted, .sortedKeys] let jsonData = try encoder.encode(str) let json = String(decoding: jsonData, as: UTF8.self) This is how our sample string is encoded: [ "Café ", { }, "Sol", { "NSInlinePresentationIntent" : 2 } ] This is an array of runs, where each run consists of a text segment and a dictionary of formatting attributes. The important point is that the formatting attributes are directly associated with the text segments they belong to, not indirectly via brittle byte or character offsets. (This encoding format is also more space-efficient and possibly better represents the in-memory layout of AttributedString, but that’s beside the point for this discussion.) There’s still a (smaller) potential problem here if the character boundary rules change for code points that span two adjacent text segments: the last character of run N and the first character of run N+1 might suddenly form a single character (grapheme cluster) in a new Unicode version. In that case, the decoding code will have to decide which formatting attributes to apply to this new character. But this is a much smaller issue because it only affects the characters in question. Unlike our original example, where an off-by-one error in run N would affect all subsequent runs, all other runs are untouched. Related forum discussion: Itai Ferber on why Character isn’t Codable. Storing string offsets is a bad idea We can extract a general lesson out of this: Don’t store string indices or offsets if possible. They aren’t stable over time or across runtime environments. On Apple platforms, the Swift standard library ships as part of the OS so I’d guess that the standard library’s grapheme breaking algorithm will be based on the same Unicode version that ships with the corresponding OS version. This is effectively no change in behavior compared to the pre-Swift 5.6 world (where the OS’s ICU library determined the Unicode version). On non-ABI-stable platforms (e.g. Linux and Windows), the Unicode version used by your program is determined by the version of the Swift compiler your program is compiled with, if my understanding is correct. ↩︎ The Swift standard library doesn’t have APIs for Unicode normalization yet, but you can use the corresponding NSString APIs, which are automatically added to String when you import Foundation: import Foundation let precomposed = "é".precomposedStringWithCanonicalMapping let decomposed = "é".decomposedStringWithCanonicalMapping precomposed == decomposed // → true precomposed.unicodeScalars.count // → 1 decomposed.unicodeScalars.count // → 2 precomposed.utf8.count // → 2 decomposed.utf8.count // → 3 ↩︎ By the way, I see a lot of code using String(jsonData, encoding: .utf8)! to create a string from UTF-8 data. String(decoding: jsonData, as: UTF8.self) saves you a force-unwrap and is arguably “cleaner” because it doesn’t depend on Foundation. Since it never fails, it’ll insert replacement characters into the string if it encounters invalid byte sequences. ↩︎
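To make the normalization pitfall from the “Normalization forms” section concrete, here’s a small sketch (my own numbers, computed from the example string) of how a byte-offset-based run would drift between forms:

```swift
import Foundation

let precomposed = "Café Sol".precomposedStringWithCanonicalMapping
let decomposed  = "Café Sol".decomposedStringWithCanonicalMapping

precomposed.utf8.count // → 9  ("é" is 2 bytes)
decomposed.utf8.count  // → 10 ("e" + combining accent is 3 bytes)

// A bold run for "Sol" stored as UTF-8 byte offsets 6..<9 against the
// precomposed form would cover " So" if the string were later re-normalized
// to the decomposed form, where "Sol" actually occupies bytes 7..<10.
```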
A heterogeneous dictionary with strong types in Swift
The environment in SwiftUI is sort of like a global dictionary but with stronger types: each key (represented by a key path) can have its own specific value type. For example, the \.isEnabled key stores a boolean value, whereas the \.font key stores an Optional<Font>. I wrote a custom dictionary type that can do the same thing. The HeterogeneousDictionary struct I show in this article stores mixed key-value pairs where each key defines the type of value it stores. The public API is fully type-safe, no casting required. Usage I’ll start with an example of the finished API. Here’s a dictionary for storing text formatting attributes: import AppKit var dict = HeterogeneousDictionary<TextAttributes>() dict[ForegroundColor.self] // → nil // The value type of this key is NSColor dict[ForegroundColor.self] = NSColor.systemRed dict[ForegroundColor.self] // → NSColor.systemRed dict[FontSize.self] // → nil // The value type of this key is Double dict[FontSize.self] = 24 dict[FontSize.self] // → 24 (type: Optional<Double>) We also need some boilerplate to define the set of keys and their associated value types. The code to do this for three keys (font, font size, foreground color) looks like this: // The domain (aka "keyspace") enum TextAttributes {} struct FontSize: HeterogeneousDictionaryKey { typealias Domain = TextAttributes typealias Value = Double } struct Font: HeterogeneousDictionaryKey { typealias Domain = TextAttributes typealias Value = NSFont } struct ForegroundColor: HeterogeneousDictionaryKey { typealias Domain = TextAttributes typealias Value = NSColor } Yes, this is fairly long, which is one of the downsides of this approach. At least you only have to write it once per “keyspace”. I’ll walk you through it step by step. Notes on the API Using types as keys As you can see in this line, the dictionary keys are types (more precisely, metatype values): dict[FontSize.self] = 24 This is another parallel with the SwiftUI environment, which also uses types as keys (the public environment API uses key paths as keys, but you’ll see the types underneath if you ever define your own environment key). Why use types as keys? We want to establish a relationship between a key and the type of values it stores, and we want to make this connection known to the type system. The way to do this is by defining a type that sets up this link. Domains aka “keyspaces” A standard Dictionary is generic over its key and value types. This doesn’t work for our heterogeneous dictionary because we have multiple value types (and we want more type safety than Any provides). Instead, a HeterogeneousDictionary is parameterized with a domain: // The domain (aka "keyspace") enum TextAttributes {} var dict = HeterogeneousDictionary<TextAttributes>() The domain is the “keyspace” that defines the set of legal keys for this dictionary. Only keys that belong to the domain can be put into the dictionary. The domain type has no protocol constraints; you can use any type for this. Defining keys A key is a type that conforms to the HeterogeneousDictionaryKey protocol. The protocol has two associated types that define the relationships between the key and its domain and value type: protocol HeterogeneousDictionaryKey { /// The "namespace" the key belongs to. associatedtype Domain /// The type of values that can be stored /// under this key in the dictionary. 
associatedtype Value } You define a key by creating a type and adding the conformance: struct Font: HeterogeneousDictionaryKey { typealias Domain = TextAttributes typealias Value = NSFont } Implementation notes A minimal implementation of the dictionary type is quite short: struct HeterogeneousDictionary<Domain> { private var storage: [ObjectIdentifier: Any] = [:] var count: Int { self.storage.count } subscript<Key>(key: Key.Type) -> Key.Value? where Key: HeterogeneousDictionaryKey, Key.Domain == Domain { get { self.storage[ObjectIdentifier(key)] as! Key.Value? } set { self.storage[ObjectIdentifier(key)] = newValue } } } Internal storage private var storage: [ObjectIdentifier: Any] = [:] Internally, HeterogeneousDictionary uses a dictionary of type [ObjectIdentifier: Any] for storage. We can’t use a metatype such as Font.self directly as a dictionary key because metatypes aren’t hashable. But we can use the metatype’s ObjectIdentifier, which is essentially the address of the type’s representation in memory. Subscript subscript<Key>(key: Key.Type) -> Key.Value? where Key: HeterogeneousDictionaryKey, Key.Domain == Domain { get { self.storage[ObjectIdentifier(key)] as! Key.Value? } set { self.storage[ObjectIdentifier(key)] = newValue } } The subscript implementation constrains its arguments to keys in the same domain as the dictionary’s domain. This ensures that you can’t subscript a dictionary for text attributes with some other unrelated key. If you find this too restrictive, you could also remove all references to the Domain type from the code; it would still work. Using key paths as keys Types as keys don’t have the best syntax. I think you’ll agree that dict[FontSize.self] doesn’t read as nice as dict[\.fontSize], so I looked into providing a convenience API based on key paths. My preferred solution would be if users could define static helper properties on the domain type, which the dictionary subscript would then accept as key paths, like so: extension TextAttributes { static var fontSize: FontSize.Type { FontSize.self } // Same for font and foregroundColor } Sadly, this doesn’t work because Swift 5.6 doesn’t (yet?) support key paths to static properties (relevant forum thread). We have to introduce a separate helper type that acts as a namespace for these helper properties. Since the dictionary type can create an instance of the helper type, it can access the non-static helper properties. This doesn’t feel as clean to me, but it works. I called the helper type HeterogeneousDictionaryValues as a parallel with EnvironmentValues, which serves the same purpose in SwiftUI. The code for this is included in the Gist. Drawbacks Is the HeterogeneousDictionary type useful? I’m not sure. I wrote this mostly as an exercise and haven’t used it yet in a real project. In most cases, if you need a heterogeneous record with full type safety, it’s probably easier to just write a new struct where each property is optional — the boilerplate for defining the dictionary keys is certainly longer and harder to read. For representing partial values, i.e. 
struct-like records where some but not all properties have values, take a look at these two approaches from 2018: Ian Keen, Type-safe temporary models (2018-06-05) Joseph Duffy, Partial in Swift (2018-07-10), also available as a library These use a similar storage approach (a dictionary of Any values with custom accessors to make it type-safe), but they use an existing struct as the domain/keyspace, combined with partial key paths into that struct as the keys. I honestly think that this is the better design for most situations. Aside from the boilerplate, here are a few more weaknesses of HeterogeneousDictionary: Storage is inefficient because values are boxed in Any containers Accessing values is inefficient: every access requires unboxing HeterogeneousDictionary can’t easily conform to Sequence and Collection because these protocols require a uniform element type The code The full code is available in a Gist.
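For completeness, here's a rough sketch of the key-path convenience described in “Using key paths as keys”. This is my guess at the shape of that approach, simplified to a single domain, and not the exact code from the Gist; it builds on the TextAttributes, FontSize, and ForegroundColor definitions from earlier:

```swift
import AppKit

// Helper "values" type: instance properties stand in for the static
// properties that Swift 5.6 key paths can't express.
struct TextAttributeValues {
    var fontSize: FontSize.Type { FontSize.self }
    var foregroundColor: ForegroundColor.Type { ForegroundColor.self }
}

extension HeterogeneousDictionary where Domain == TextAttributes {
    subscript<Key>(keyPath: KeyPath<TextAttributeValues, Key.Type>) -> Key.Value?
        where Key: HeterogeneousDictionaryKey, Key.Domain == Domain
    {
        get { self[TextAttributeValues()[keyPath: keyPath]] }
        set { self[TextAttributeValues()[keyPath: keyPath]] = newValue }
    }
}

var attributes = HeterogeneousDictionary<TextAttributes>()
attributes[\.fontSize] = 24                      // reads better than attributes[FontSize.self]
attributes[\.foregroundColor] = NSColor.systemRed
```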
Advanced Swift, fifth edition
We released the fifth edition of our book Advanced Swift a few days ago. You can buy the ebook on the objc.io site. The hardcover print edition is printed and sold by Amazon (amazon.com, amazon.co.uk, amazon.de). Highlights of the new edition: Fully updated for Swift 5.6 A new Concurrency chapter covering async/await, structured concurrency, and actors New content on property wrappers, result builders, protocols, and generics The print edition is now a hardcover (for the same price) Free update for owners of the ebook A growing book for a growing language Updating the book always turns out to be more work than I expect. Swift has grown substantially since our last release (for Swift 5.0), and the size of the book reflects this. The fifth edition is 76 % longer than the first edition from 2016. This time, we barely stayed under 1 million characters: Character counts of Advanced Swift editions from 2016–2022. Many thanks to our editor, Natalye, for reading all this and improving our Dutch/German dialect of English. Hardcover For the first time, the print edition comes in hardcover (for the same price). Being able to offer this makes me very happy. The hardcover book looks much better and is more likely to stay open when laid flat on a table. We also increased the page size from 15×23 cm (6×9 in) to 18×25 cm (7×10 in) to keep the page count manageable (Amazon’s print on demand service limits hardcover books to 550 pages). I hope you enjoy the new edition. If you decide to buy the book or if you bought it in the past, thank you very much! And if you’re willing to write a review on Amazon, we’d appreciate it.
Synchronous functions can support cancellation too
Cancellation is a Swift concurrency feature, but this doesn’t mean it’s only available in async functions. Synchronous functions can also support cancellation, and by doing so they’ll become better concurrency citizens when called from async code. Motivating example: JSONDecoder Supporting cancellation makes sense for functions that can block for significant amounts of time (say, more than a few milliseconds). Take JSON decoding as an example. Suppose we wrote an async function that performs a network request and decodes the downloaded JSON data: import Foundation func loadJSON<T: Decodable>(_ type: T.Type, from url: URL) async throws -> T { let (data, _) = try await URLSession.shared.data(from: url) return try JSONDecoder().decode(type, from: data) } The JSONDecoder.decode call is synchronous: it will block its thread until it completes. And if the download is large, decoding may take hundreds of milliseconds or even longer. Avoid blocking if possible In general, async code should avoid calling blocking APIs if possible. Instead, async functions are expected to suspend regularly to give waiting tasks a chance to run. But JSONDecoder doesn’t have an async API (yet?), and I’m not even sure it can provide one that works with the existing Codable protocols, so let’s work with what we have. And if you think about it, it’s not totally unreasonable for JSONDecoder to block. After all, it is performing CPU-intensive work (assuming the data it’s working on doesn’t have to be paged in), and this work has to happen on some thread. Async/await works best for I/O-bound functions that spend most of their time waiting for the disk or the network. If an I/O-bound function suspends, the runtime can give the function’s thread to another task that can make more productive use of the CPU. Responding to cancellation Cancellation is a cooperative process. Canceling a task only sets a flag in the task’s metadata. It’s up to individual functions to periodically check for cancellation and abort if necessary. If a function doesn’t respond promptly to cancellation or outright ignores the cancellation flag, the program may appear to the user to be stalling. Now, if the task is canceled while JSONDecoder.decode is running, our loadJSON function can’t react properly because it can’t interrupt the decoding process. To fix this, the decode method would have to perform its own periodic cancellation checks, using the usual APIs, Task.isCancelled or Task.checkCancellation(). These can be called from anywhere, including synchronous code. Internals How does this work? How can synchronous code access task-specific metadata? Here’s the code for Task.isCancelled in the standard library: extension Task where Success == Never, Failure == Never { public static var isCancelled: Bool { withUnsafeCurrentTask { task in task?.isCancelled ?? false } } } This calls withUnsafeCurrentTask to get a handle to the current task. When the runtime schedules a task to run on a particular thread, it stores a pointer to the task object in that thread’s thread-local storage, where any code running on that thread – sync or async – can access it. If task == nil, there is no current task, i.e. we haven’t been called (directly or indirectly) from an async function. In this case, cancellation doesn’t apply, so we can return false. If we do have a task handle, we ask the task for its isCancelled flag and return that. Reading the flag is an atomic (thread-safe) operation because other threads may be writing to it concurrently. 
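Here's a sketch of what such periodic checks could look like in a synchronous, CPU-bound function. The Record type and its parsing initializer are hypothetical placeholders:

```swift
import Foundation

struct Record {
    // Hypothetical line-based record; the parsing logic is just a placeholder.
    init(parsing line: Substring) throws { /* … */ }
}

func parseRecords(from data: Data) throws -> [Record] {
    let lines = String(decoding: data, as: UTF8.self).split(separator: "\n")
    var records: [Record] = []
    for (index, line) in lines.enumerated() {
        // Check for cancellation every so often. Checking on every iteration
        // would add needless overhead for a cheap loop body; when there is no
        // enclosing task, checkCancellation() simply doesn't throw.
        if index.isMultiple(of: 1_000) {
            try Task.checkCancellation()
        }
        records.append(try Record(parsing: line))
    }
    return records
}
```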
Conclusion I hope we’ll see cancellation support in the Foundation encoders and decoders in the future. If you have written synchronous functions that can potentially block their thread for a significant amount of time, consider adding periodic cancellation checks. It’s a quick way to make your code work better with the concurrency system, and you don’t even have to change your API to do it. Update February 2, 2022: Jordan Rose argues that cancellation support for synchronous functions should be opt-in because it introduces a failure mode that’s hard to reason about locally as the “source“ of the failure (the async context) may be several levels removed from the call site. Definitely something to consider!
Cancellation can come in many forms
In Swift’s concurrency model, cancellation is cooperative. To be a good concurrency citizen, code must periodically check if the current task has been cancelled, and react accordingly. You can check for cancellation by calling Task.isCancelled or with try Task.checkCancellation() — the latter will exit by throwing a CancellationError if the task has been cancelled. By convention, functions should react to cancellation by throwing a CancellationError. But this convention isn’t enforced, so callers must be aware that cancellation can manifest itself in other forms. Here are some other ways how functions might respond to cancellation: Throw a different error. For example, the async networking APIs in Foundation, such as URLSession.data(from: URL), throw a URLError with the code URLError.Code.cancelled on cancellation. It’d be nice if URLSession translated this error to CancellationError, but it doesn’t. Return a partial result. A function that has completed part of its work when cancellation occurs may choose to return a partial result rather than throwing the work away and aborting. In fact, this may be the best choice for a non-throwing function. But note that this behavior can be extremely surprising to callers, so be sure to document it clearly. Do nothing. Functions are supposed to react promptly to cancellation, but callers must assume the worst. Even if cancelled, a function might run to completion and finish normally. Or it might eventually respond to cancellation by aborting, but not promptly because it doesn’t perform its cancellation checks often enough. So as the caller of a function, you can’t really rely on specific cancellation behavior unless you know how the callee is implemented. Code that wants to know if its task has been cancelled should itself call Task.isCancelled, rather than counting on catching a CancellationError from a callee.
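As a defensive measure, a caller (or a thin wrapper) can translate the URLSession behavior mentioned above back to the convention. A sketch:

```swift
import Foundation

// Translate URLSession's cancellation error into the conventional
// CancellationError so callers only have to handle one shape of cancellation.
func fetchData(from url: URL) async throws -> Data {
    do {
        let (data, _) = try await URLSession.shared.data(from: url)
        return data
    } catch let error as URLError where error.code == .cancelled {
        throw CancellationError()
    }
}
```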

Software Development News
Report: 71% of tech leaders won’t hire devs without AI skills
- Latest News
- AI
- Infragistics
As AI becomes more ingrained within the software development life cycle, tech leaders making hiring decisions are saying AI and machine learning are becoming non-negotiable skills. 71% of respondents to a new study from Infragistics say that they won’t hire developers without those skills. The 2025 App Development Trends Report, conducted in partnership with Dynata, features insights from over 300 U.S. tech leaders surveyed between December 2024 and January 2025. Thirty percent of respondents said that one of their top challenges this year is recruiting qualified developers. In addition to hiring for AI skills, 53% of leaders are also looking for cloud computing skills, 35% are looking for problem solving skills, and 35% are looking for developers who use secure coding practices. “AI is rapidly transforming how businesses develop applications–from streamlining workflows to mitigating security risks—but the technology alone isn’t powerful without a skilled team behind it,” said Jason Beres, COO of Infragistics. “As companies look to expand the AI use within their business, hiring developers skilled in AI and machine learning, along with investing in upskilling, is critical to their ability to drive innovation and remain competitive.” Other key challenges that tech leaders are dealing with are cybersecurity threats (45%), implementing AI (37%), and retaining qualified developers (35%). The survey found that 87% of teams are currently using AI in their development process, and of the companies not using AI at the moment, 45% say they are likely to start within the next year. The biggest use cases for AI in development are automating repetitive tasks (40%), creating layout and pages (34%), and detecting bugs. About a third of leaders believe AI is freeing up developers to spend time on more meaningful work. The full survey can be found on Infragistics’ website here. The post Report: 71% of tech leaders won’t hire devs without AI skills appeared first on SD Times.
Slack’s AI search now works across an organization’s entire knowledge base
- Latest News
- AI
- Slack
Slack is introducing a number of new AI-powered tools to make team collaboration easier and more intuitive. “Today, 60% of organizations are using generative AI. But most still fall short of its productivity promise. We’re changing that by putting AI where work already happens — in your messages, your docs, your search — all designed to be intuitive, secure, and built for the way teams actually work,” Slack wrote in a blog post. The new enterprise search capability will enable users to search not just in Slack, but any app that is connected to Slack. It can search across systems of record like Salesforce or Confluence, file repositories like Google Drive or OneDrive, developer tools like GitHub or Jira, and project management tools like Asana. “Enterprise search is about turning fragmented information into actionable insights, helping you make quicker, more informed decisions, without leaving Slack,” the company explained. The platform is also getting AI-generated channel recaps and thread summaries, helping users catch up on conversations quickly. It is introducing AI-powered translations as well to enable users to read and respond in their preferred language. Enterprise search, recaps, and translations are now generally available, and the company also revealed some additional upcoming AI features that will be added to the platform soon, including AI message explanations, AI action items, AI writing assistance in canvas, AI profile summaries, and a unified files view. AI message explanations will provide an instant explanation of a message by hovering over it, AI profile summaries provide context about a team member’s role and recent work, and AI-generated action items will be created when a user is mentioned in a message that includes a follow-up, deadline, or request. AI writing assistance will be able to summarize key points, extract action items, generate a first draft, or rewrite content to fit a desired tone. Finally, Slack will be introducing unified file views, bringing all canvases, lists, and shared documents into a single organized space. Previously these features were separated into different tabs of the app. “As collaboration scales, so does the need to keep content organized and accessible. By centralizing your content, Slack helps your team reduce friction and keep work moving. It’s another step in building a work OS that is not only powerful but also delightfully simple,” the company said. In terms of privacy and security, Slack clarified that it does not use customer data to train generative AI models. It also says that the AI will only surface information that a user is already allowed to access. Additionally, all of the platform’s AI features comply with Slack’s existing enterprise-grade security and compliance standards. These new AI features are only available in Slack’s paid plans. The Pro plan includes basic AI summarization for channels, threads, and huddles. The Business+ plan includes everything in Pro, as well as recaps, translations, workflow generation, and AI-powered search. The Enterprise+ plan includes all of Slack’s AI features, including enterprise search, evolved task management, and enterprise-grade security and governance controls. The post Slack’s AI search now works across an organization’s entire knowledge base appeared first on SD Times.
Anthropic’s Claude Code gets new analytics dashboard to provide insights into how teams are using AI tooling
- Latest News
- AI
- anthropic
- claude code
Anthropic has announced the launch of a new analytics dashboard in Claude Code to give development teams insights into how they are using the tool. It tracks metrics such as lines of code accepted, suggestion acceptance rate, total user activity over time, total spend over time, average daily spend for each user, and average daily lines of code accepted for each user. These metrics can help organizations understand developer satisfaction with Claude Code suggestions, track code generation effectiveness, and identify opportunities for process improvements. According to Anthropic, by tracking these metrics, development teams will be able to better assess the ROI of AI and see where they are getting the most value. The company says that an analytics dashboard was one of the most requested features from its enterprise customers, and that it marks another step in the company’s mission to enable engineering teams to adapt their AI practices as things evolve. The analytics dashboard is available to organizations that are using Claude Code with the Anthropic API through the Anthropic Console. The post Anthropic’s Claude Code gets new analytics dashboard to provide insights into how teams are using AI tooling appeared first on SD Times.
JetBrains updates Junie, Gemini API adds embedding model, and more – Daily News Digest
- Latest News
- AI
- Amazon
- gemini
- JetBrains
JetBrains announces updates to its coding agent Junie Junie is now fully integrated into GitHub, enabling asynchronous development with features such as the ability to delegate multiple tasks simultaneously, the ability to make quick fixes without opening the IDE, team collaboration directly in GitHub, and seamless switching between the IDE and GitHub. Junie on GitHub is currently in an early access program and only supports JVM and PHP. JetBrains also added support for MCP to enable Junie to connect to external sources. Other new features include 30% faster task completion speed and support for remote development on macOS and Linux. Gemini API gets first embedding model These types of models generate embeddings for words, phrases, sentences, and code, to provide context-aware results that are more accurate than keyword-based approaches. “They efficiently retrieve relevant information from knowledge bases, represented by embeddings, which are then passed as additional context in the input prompt to language models, guiding it to generate more informed and accurate responses,” the Gemini docs say. The embedding model in the Gemini API supports over 100 languages and a 2048 input token length. It will be offered via both free and paid tiers to enable developers to experiment with it for free and then scale up as needed. Amazon adds new capabilities to SageMaker Users can now launch Amazon QuickSight from within SageMaker Unified Studio to build dashboards using project data and share them to the Amazon SageMaker Catalog for discovery across their organization. In addition, support was added for Amazon S3 general purpose buckets to enable users to find and collaborate on data and S3 Access Grants to ensure fine-grained access control. Users can now onboard AWS Glue Data Catalog datasets into SageMaker catalog as well. “These new SageMaker capabilities address the complete data lifecycle within a unified and governed experience. You get automatic onboarding of existing structured data from your lakehouse, seamless cataloging of unstructured data content in Amazon S3, and streamlined visualization through QuickSight—all with consistent governance and access controls,” AWS wrote in a blog post. The post JetBrains updates Junie, Gemini API adds embedding model, and more – Daily News Digest appeared first on SD Times.
Native vs hybrid vs cross-platform: Resolving the trilemma
- Latest News
- mobile
- mobile development
- software development
Any company planning to build a mobile app faces a fundamental choice: which development method to use? Unless you have extensive mobile development experience, choosing between native, hybrid, and cross-platform approaches, the most common ones today, is a challenging task. The approaches differ significantly in complexity, development timelines, and cost, so you should understand these differences clearly to determine which one best suits your project's needs.

What are native apps?
Native mobile applications are built for a single operating system, predominantly Android or iOS. Native apps use only the programming languages supported by their target platform, such as Kotlin/Java for Android and Swift/Objective-C for iOS.

Pros
- Extensive access to hardware: Native apps integrate deeply with the device's software and hardware, gaining full access to functionality such as GPS, camera, and storage, which helps ensure strong performance and more dynamic, engaging user experiences.
- Recognizable look and feel: Native apps are inherently tailored to the look and feel of their target operating system, so users already accustomed to the platform's UI get a more intuitive and comfortable experience.

Cons
- Limited user reach: Because a native app is compatible with only one operating system, its potential user base is limited to that platform, which can hinder a business's ability to grow quickly.
- Increased development time and cost: A company that wants to reach a broader audience beyond one operating system has to build a separate native app for each platform, which requires considerably more time and investment.

What are hybrid apps?
Hybrid applications are built with web technologies and run inside a native app wrapper. Although hybrid apps are essentially web apps, the native shell lets them provide native-like experiences across different platforms.

Pros
- Cost and time efficiency: Hybrid apps reuse a single web codebase written in HTML, CSS, and JavaScript across multiple mobile operating systems, which can significantly reduce development time and cost for companies targeting broader audiences.
- Extensive developer pool: Because hybrid apps rely on web technologies, which are widely adopted among developers, a company can find the required engineering talent more easily and avoid hiring and training additional IT staff.

Cons
- Limited hardware access: Hybrid apps can face limitations when integrating with the device's native capabilities, which can hinder interaction with platform-specific functionality and result in lower performance and less seamless user experiences.
- Security concerns: Because hybrid apps use web technologies, which are typically more exposed to attack, they can be more vulnerable to cybersecurity threats, so a company should be prepared to implement additional security measures.
- Lack of offline functionality: Because hybrid apps blend the capabilities of native and web apps, they typically require internet connectivity to function, unlike native and cross-platform apps, which can work well even when connectivity is poor or unavailable.

What are cross-platform apps?
Cross-platform apps are compatible with both Android and iOS, meaning a single app can run seamlessly on multiple platforms.
Pros
- Quick and cost-effective development: Although cross-platform apps can require platform-specific optimizations, which makes them harder to develop than hybrid apps, the approach still lets teams use a single codebase, resulting in faster and more cost-efficient development than building separate native apps.
- Native-like look and feel: While no app fully matches a native one in terms of UI, cross-platform apps still look, feel, and operate very much like native apps, ensuring good user experiences across devices. In this regard they are superior to hybrid apps, because they do not rely on web technologies, which can introduce UI inconsistencies when an app is viewed on a mobile device.

Cons
- The need for broader developer expertise: As mentioned earlier, cross-platform apps often need additional optimizations for design, performance, or functionality, which means a company may need developers experienced in several platforms and programming languages.
- Larger app size: Cross-platform apps are often larger than native and hybrid apps because they bundle the runtime and libraries their frameworks require, which can hurt the user experience through longer load times.

How to choose between native, hybrid, and cross-platform?
To understand which approach best suits your project requirements, consider the following factors:
- User experience: If you want to give users the most seamless and engaging experience possible, consider native development, which outperforms the other approaches in this regard. If you target a broad audience and building multiple native apps is too costly, consider cross-platform development as an alternative.
- Project budget: Since the three options vary in complexity and cost, factor in your budget constraints. If you need apps for multiple operating systems but cannot afford native development, consider the cross-platform or hybrid approaches.
- Time to market: Hybrid development generally takes less time than native and cross-platform development. If time to market is your highest priority, for example when launching a startup to test an app idea, opt for a hybrid app.
- Team expertise: Finally, evaluate your team's technical knowledge and skills. If your team excels at frameworks like React Native or Flutter, cross-platform development may be the more feasible option; if it is primarily composed of experienced web developers, hybrid development is the natural choice.

Final thoughts
Native, hybrid, and cross-platform development are three distinct approaches to building mobile apps, each with different purposes, strengths, and downsides. The differences outlined above should help you make an informed choice for your project. The post Native vs hybrid vs cross-platform: Resolving the trilemma appeared first on SD Times.
Kong AI Gateway 3.11 introduces new method for reducing token costs
- Latest News
- AI
- Kong
Kong has introduced the latest update to Kong AI Gateway, a solution for securing, governing, and controlling LLM consumption from popular third-party providers. Kong AI Gateway 3.11 introduces a new plugin that reduces token costs, several new generative AI capabilities, and support for AWS Bedrock Guardrails. The new prompt compression plugin removes padding and redundant words or phrases. This approach preserves 80% of the intended semantic meaning of the prompt, while the removal of unnecessary words can lead to up to a 5x reduction in cost. According to Kong, the prompt compression plugin complements other cost-saving measures, such as Semantic Caching to prevent redundant LLM calls and AI Rate Limiting to manage usage limits by application or team. This update also adds over 10 new generative AI capabilities, including batch execution of multiple LLM calls, audio transcription and translation, image generation, stateful assistants, and enhanced response introspection. Finally, Kong AI Gateway 3.11 adds support for AWS Bedrock Guardrails, which can help protect AI applications from malicious and unintended consequences, like hallucinations or inappropriate content. Developers can monitor applications and adjust policies in real time without needing to change code. "We’re excited to introduce one of our most significant Kong AI Gateway releases to date. With features like prompt compression, multimodal support and guardrails, version 3.11 gives teams the tools they need to build more capable AI systems—faster and with far less operational overhead. It’s a major step forward for any organization looking to scale AI reliably while keeping infrastructure costs under control," said Marco Palladino, CTO and co-founder of Kong. The post Kong AI Gateway 3.11 introduces new method for reducing token costs appeared first on SD Times.
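The announcement does not detail how the compression works, but the general idea of stripping low-information tokens before a prompt reaches the model can be sketched with a deliberately naive example. The filler-word list and the word-count proxy below are illustrative assumptions, not Kong's implementation:

# Toy illustration of prompt compression (NOT Kong's plugin): drop common
# filler words and collapse whitespace, then compare approximate sizes.
import re

FILLER_WORDS = {"please", "kindly", "basically", "actually", "very", "really", "just"}

def compress_prompt(prompt: str) -> str:
    words = prompt.split()
    kept = [w for w in words if w.lower().strip(".,!?") not in FILLER_WORDS]
    return re.sub(r"\s+", " ", " ".join(kept)).strip()

original = "Please could you very kindly just summarise, really briefly, the main points of this report?"
compressed = compress_prompt(original)

# Word count is a rough proxy for tokens; a real gateway would use the
# target model's tokenizer and far more sophisticated compression.
print(len(original.split()), "->", len(compressed.split()), ":", compressed)

That last point is what makes the claim in the article notable: preserving 80% of the semantic meaning while cutting token costs by up to 5x requires much smarter trimming than a static stop-word list.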
Twilio’s Event Triggered Journeys, OutSystems’ Agent Workbench, and more – Daily News Digest
- Latest News
- OutSystems
- Perforce
- Twilio
Twilio launches Event Triggered Journeys in Twilio Engage
This new capability allows developers to incorporate personalized, scalable messaging into their applications by utilizing event data, user profile information, and context from the data warehouse, such as loyalty status or account details. According to Twilio, it can help with use cases like cart abandonment, ad suppression, onboarding flows, and trial-to-paid account journeys. Twilio also redesigned the Journeys user interface and code base and added new integrations for SendGrid and Twilio Messaging to make multi-channel journey orchestration easier.

OutSystems launches Agent Workbench
Agent Workbench, now in early access, allows companies to create agents that have enterprise-grade security and controls. Agents can integrate with custom AI models or third-party ones like Azure OpenAI or AWS Bedrock. It contains a unified data fabric for connecting to enterprise data sources, including existing OutSystems 11 data and actions, relational databases, data lakes, and knowledge retrieval systems like Azure AI Search. It comes with built-in monitoring features, error tracing, and guardrails, providing insights into how AI agents are behaving throughout their lifecycle.

Perforce launches Perfecto AI
Perfecto AI is a testing model within Perfecto's mobile testing platform that can generate tests and adapt in real time to UI changes, failures, and changing user flows. According to Perforce, Perfecto AI's early testing has shown 50-70% efficiency gains in test creation, stabilization, and triage. "With this release, you can create a test before any code is written—true Test-Driven Development (TDD)—contextual validation of dynamic content like charts and images, and triage failures in real time—without the legacy baggage of scripts and frameworks," said Stephen Feloney, VP of product management at Perforce. "Unlike AI copilots that simply generate scripts tied to fragile frameworks, Perforce Intelligence eliminates scripts entirely and executes complete tests with zero upkeep—eliminating rework, review, and risk."

The post Twilio’s Event Triggered Journeys, OutSystems’ Agent Workbench, and more – Daily News Digest appeared first on SD Times.
Harness Infrastructure as Code Management expands with features that facilitate better reusability
- Latest News
- harness
- IaC
- software development
Harness is expanding its Infrastructure as Code Management (IaCM) platform with two new features that should enable greater reusability. "During customer meetings one theme came up over and over again – the need to define infrastructure once and reuse it across the platform in a secure and consistent manner, at scale. Our latest expansion of Harness IaCM was built to solve exactly that," Harness wrote in a blog post. The first new feature is Module Registry, which allows users to create, share, and manage templates for infrastructure components, like virtual machines, databases, and networks. It offers centralized storage, version management, granular controls over who can access modules, integration into existing CI/CD workflows, and automatic syncing of modules to source repositories. The other new feature is Workspace Templates, allowing developers to predefine variables, configuration settings, and policies so that they can be reused as templates. Teams will be able to "start from template" to spin up new projects with their desired settings already in place, reducing manual effort, accelerating onboarding, and avoiding common misconfigurations. The company also revealed some of the items on the IaCM roadmap, including expanding support for IaC tools like Ansible and Terragrunt, adding reusable variable sets and a centralized provider registry to enable even more standardization, and improving how teams create and manage workspaces for testing, iteration, and experimentation. "Harness’ Infrastructure as Code Management (IaCM) was built to address a massive untapped opportunity: to merge automation with deep capabilities in compliance, governance, and operational efficiency and create a solution that redefines how infrastructure code is managed throughout its lifecycle. Since launch, we’ve continued to invest in that vision – adding powerful features to drive consistency, governance, and speed. And we’re just getting started," Harness wrote. The post Harness Infrastructure as Code Management expands with features that facilitate better reusability appeared first on SD Times.
Amazon launches spec-driven AI IDE, Kiro
- Latest News
- AI
- Amazon
- kiro
Amazon is releasing a new AI IDE to rival platforms like Cursor or Windsurf. Kiro is an agentic editor that utilizes spec-driven development to combine "the flow of vibe coding" with "the clarity of specs." According to Amazon, developers use specs for planning and clarity, and they can benefit agents in the same way. Specs in Kiro are artifacts that can be used whenever a feature needs to be thought through in depth, when refactoring work requires upfront planning, or when a developer wants to understand the behavior of a system. Kiro also features hooks, which the company describes as event-driven automations that trigger an agent to execute a task in the background. According to Amazon, Kiro hooks are sort of like an experienced developer catching the things you’ve missed or completing boilerplate tasks as you work. The basic workflow of building with Kiro specs and hooks consists of four steps. First, Kiro unpacks requirements from a single prompt and creates user stories that include Easy Approach to Requirements Syntax (EARS) notation acceptance criteria so that developers can verify that Kiro is building what they want. For example, the prompt "Add a review system for products" would lead to the creation of user stories for viewing, creating, filtering, and rating reviews. Next, it analyzes the existing codebase and spec requirements to create a design document that includes data flow diagrams, TypeScript interfaces, database schema, and API endpoints. Then, Kiro creates tasks and sub-tasks, sequences them based on dependencies, and links each one to its requirements. Each task will include details like unit tests, integration tests, loading states, mobile responsiveness, and accessibility requirements for implementation. Finally, hooks are executed when files are saved or created, such as updating a test file when a React component is saved or updating README files when API endpoints are changed. Kiro also includes features like MCP support, steering rules for AI behavior, and an agentic chat mode. "Our vision is to solve the fundamental challenges that make building software products so difficult—from ensuring design alignment across teams and resolving conflicting requirements, to eliminating tech debt, bringing rigor to code reviews, and preserving institutional knowledge when senior engineers leave. The way humans and machines coordinate to build software is still messy and fragmented, but we’re working to change that. Specs is a major step in that direction," Kiro wrote in a blog post. The post Amazon launches spec-driven AI IDE, Kiro appeared first on SD Times.
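For readers unfamiliar with EARS, each acceptance criterion follows a trigger-and-response template. For the hypothetical review-system prompt above, an event-driven EARS criterion might read: "When a signed-in shopper submits a review with a rating from 1 to 5, the review system shall save the review and display it on the product page." This example is illustrative of the notation, not taken from Kiro's actual output.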
Akka introduces platform for distributed agentic AI
- Latest News
- AI
- Akka
Akka, a company that provides solutions for building distributed applications, is introducing a new platform for scaling AI agents across distributed systems. "Agentic systems are forcing IT leaders to rethink their technology stack," said Tyler Jewell, CEO of Akka. "IT systems must adapt from controlling predefined workflows to managing intelligent, adaptive systems operating in open-ended environments that include non-deterministic LLMs. Scaling these systems and providing dependable outputs is a tremendous challenge and redefines the meaning of an SLA. Akka is unique in that we’re bringing IT the tools to solve this issue at enterprise scale, with enterprise confidence." Akka Agentic Platform consists of four integrated offerings: Akka Orchestration, Akka Agents, Akka Memory, and Akka Streaming. Akka Orchestration allows developers to guide, moderate, and control multi-agent systems. It offers fault-tolerant execution, enabling agents to reliably complete their tasks even if there are crashes, delays, or infrastructure failures. Akka Agents provides a design model and runtime for agentic systems, allowing creators to define how the agents gather context, reason, and act, while Akka handles everything else needed for them to run. Akka Memory is durable, in-memory, sharded data that can be used to provide agents context, retain history, and personalize behavior. Data stays within an organization's infrastructure, and is replicated, shared, and rebalanced across Akka clusters. Akka Streaming offers continuous stream processing, aggregation, and augmentation of live data, metrics, audio, and video. Streams can be ingested from any source, and they can stream between agents, Akka services, and external systems. Streamed inputs can trigger actions, update memory, or feed other Akka agents. The company offers a 99.9999% SLA for the Agentic Platform, enterprise-grade security (compliance with SOC 1 Type II, SOC 2 Type II, PCI-DSS Level 1, and ISO 27001), and indemnification of Akka IP and third-party dependencies, and the platform is source-available under the Business Source License. The post Akka introduces platform for distributed agentic AI appeared first on SD Times.
