General

Principal Software Engineer @ Just Eat Takeaway. iOS Infrastructure Engineer. Based in London.
How to Implement a Decentralised CLI Tool Manager
- CLI manager
- tool
- executable
- manager
- swift
- cli
A design to implement a simple, generic and decentralised manager for CLI tools from the perspective of a Swift dev.
SPONSORED: Based on this article, I've published Luca, a lightweight decentralised tool manager for macOS to manage project-specific tool environments. Check it out: luca.tools

Overview

It's common for iOS teams to rely on various CLI tools such as SwiftLint, Tuist, and Fastlane. These tools are often installed in different ways. The most common is Homebrew, which is known to lack version pinning and, as Pedro puts it:

> Homebrew is not able to install and activate multiple versions of the same tool

I also fundamentally dislike the tap system for installing dependencies from third-party repositories. Although I don't have concrete data, I feel that most development teams profoundly dislike Homebrew when used beyond the simple installation of individual tools from the command line, and the brew taps system is cumbersome and bizarre enough to often discourage developers from using it.

Alternatives for managing sets of CLI tools that have gained traction in the past couple of years are Mint and Mise. As Pedro says in his article about Mise:

> The first and most core feature of Mise is the ability to install and activate dev tools. Note that we say "activate" because, unlike Homebrew, Mise differentiates between installing a tool and making a specific version of it available.

While beyond the scope of this article, I also recommend a great article by Natan Rolnik about installing Swift executables from source with Mise.

In this article I describe a CLI tool manager very similar to the one I've implemented for my team. I'll simply call it "ToolManager". The tool is designed to:

- Support installing any external CLI tool distributed as a zip archive
- Support activating specific versions per project
- Be decentralised (requiring no registry)

I believe the decentralisation is an interesting aspect that makes the tool reusable in any development environment. Also, unlike Mise and Mint, ToolManager doesn't build from source; it relies on pre-built executables.

In the age of GenAI, it's more important than ever to develop critical thinking and learn how to solve problems. For this reason, I won't show the implementation of ToolManager, as it's more important to understand how it's meant to work. The code you'll see in this article supports the overarching design, not the nitty-gritty details of how ToolManager's commands are implemented. If, by the end of the article, you understand how the system should work and are interested in implementing it (perhaps using GenAI), you should be able to convert the design into code fairly easily, hopefully without losing the joy of coding. I'm considering implementing ToolManager as an open source project later, as I believe it could be very helpful to many teams, just as its incarnation was (and continues to be) for the platform team at JET. There doesn't seem to be an existing tool with the design described in this article. With a different title, this article could reasonably have joined "The easiest X" "series" (1, 2, 3, 4), if I may say so.

Design

The point here is to learn what implementing a tool manager entails. I'll therefore describe the MVP of ToolManager, leaving out details that would make the design too straightforward to implement. The tool itself is a CLI, reasonably implemented in Swift using ArgumentParser like all modern Swift CLI tools.
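Since I won't show the actual implementation, here is only a minimal sketch of what the entry point of such a CLI could look like with ArgumentParser. The command and option names mirror the design described next; everything else is illustrative and not the real code:

```swift
import ArgumentParser

// Hypothetical entry point; subcommands mirror the design described below.
@main
struct ToolManager: ParsableCommand {
    static var configuration = CommandConfiguration(
        commandName: "ToolManager",
        abstract: "A decentralised manager for project-specific CLI tools.",
        subcommands: [Install.self, Uninstall.self, Version.self]
    )
}

struct Install: ParsableCommand {
    @Option(help: "Path to the spec file.")
    var spec: String = "Toolfile.yml"

    func run() throws {
        // Download the tools listed in the spec into ~/.toolManager/tools,
        // then symlink the requested versions into $(PWD)/.toolManager/active.
    }
}

struct Uninstall: ParsableCommand {
    func run() throws {
        // Remove installed tools from ~/.toolManager/tools and clear
        // the project's .toolManager/active symlinks.
    }
}

struct Version: ParsableCommand {
    func run() throws {
        print("1.2.0") // Hard-coded here for illustration only.
    }
}
```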
In its simplest form, ToolManager exposes 3 commands:

- install: downloads and installs the tools defined in a spec file (Toolfile.yml) at ~/.toolManager/tools, optionally validating the checksums, and creates symlinks to the installed versions at $(PWD)/.toolManager/active
- uninstall: clears the entire or partial content of ~/.toolManager/tools and clears the content of $(PWD)/.toolManager/active
- version: returns the version of the tool

The install command allows specifying the location of the spec file via the --spec flag, which defaults to Toolfile.yml in the current directory.

The installation of ToolManager itself should be done in the most raw way, i.e. via a remote script. It'd be quite laughable to rely on Brew, wouldn't it? This practice is commonly used by a variety of tools, for example originally by Tuist (before the introduction of Mise) and... you guessed it... by Brew. We'll see below a basic script to achieve this, which you could host on something like AWS S3 with the desired public permissions. The installation command would be:

```bash
curl -Ls 'https://my-bucket.s3.eu-west-1.amazonaws.com/install_toolmanager.sh' | bash
```

The version of ToolManager must be defined in the .toolmanager-version file in order for the installation script of the repo to work:

```bash
echo "1.2.0" > .toolmanager-version
```

ToolManager manages versions of CLI tools but it's not in the business of managing its own versions. Back in the day, Tuist used tuistenv to solve this problem. I simply avoid it and have a single version of ToolManager available at /usr/local/bin/ that the installation script overrides with the version defined for the project. The version command is used by the script to decide whether a download is needed. There will be only one version of ToolManager on the system at a given time, and that's absolutely OK.

At this point, it's time to show an example of the installation script:

```bash
#!/bin/bash
set -euo pipefail

# Fail fast if essential commands are missing.
command -v curl >/dev/null || { echo "curl not found, please install it."; exit 1; }
command -v unzip >/dev/null || { echo "unzip not found, please install it."; exit 1; }

readonly EXEC_NAME="ToolManager"
readonly INSTALL_DIR="/usr/local/bin"
readonly EXEC_PATH="$INSTALL_DIR/$EXEC_NAME"
readonly HOOK_DIR="$HOME/.toolManager"
readonly REQUIRED_VERSION=$(cat .toolmanager-version)

# Exit if the version file is missing or empty.
if [[ -z "$REQUIRED_VERSION" ]]; then
  echo "Error: .toolmanager-version not found or is empty." >&2
  exit 1
fi

# Exit if the tool is already installed and up to date.
if [[ -f "$EXEC_PATH" ]] && [[ "$($EXEC_PATH version)" == "$REQUIRED_VERSION" ]]; then
  echo "$EXEC_NAME version $REQUIRED_VERSION is already installed."
  exit 0
fi

# Determine OS and the corresponding zip filename.
case "$(uname -s)" in
  Darwin) ZIP_FILENAME="$EXEC_NAME-macOS.zip" ;;
  Linux)  ZIP_FILENAME="$EXEC_NAME-Linux.zip" ;;
  *) echo "Unsupported OS: $(uname -s)" >&2; exit 1 ;;
esac

# Download and install in a temporary directory.
TMP_DIR=$(mktemp -d)
trap 'rm -rf "$TMP_DIR"' EXIT # Ensure cleanup on script exit.

echo "Downloading $EXEC_NAME ($REQUIRED_VERSION)..."
DOWNLOAD_URL="https://github.com/MyOrg/$EXEC_NAME/releases/download/$REQUIRED_VERSION/$ZIP_FILENAME"
curl -LSsf --output "$TMP_DIR/$ZIP_FILENAME" "$DOWNLOAD_URL"
unzip -o -qq "$TMP_DIR/$ZIP_FILENAME" -d "$TMP_DIR"

# Use sudo only when the install directory is not writable.
SUDO_CMD=""
if [[ ! -w "$INSTALL_DIR" ]]; then
  SUDO_CMD="sudo"
fi

echo "Installing $EXEC_NAME to $INSTALL_DIR..."
$SUDO_CMD mkdir -p "$INSTALL_DIR"
$SUDO_CMD mv "$TMP_DIR/$EXEC_NAME" "$EXEC_PATH"
$SUDO_CMD chmod +x "$EXEC_PATH"

# Download and source the shell hook to complete installation.
echo "Installing shell hook..."
mkdir -p "$HOOK_DIR"
curl -LSsf --output "$HOOK_DIR/shell_hook.sh" "https://my-bucket.s3.eu-west-1.amazonaws.com/shell_hook.sh"
# shellcheck source=/dev/null
source "$HOOK_DIR/shell_hook.sh"

echo "Installation complete."
```

You might have noticed that:

- the required version of ToolManager (defined in .toolmanager-version) is downloaded from the corresponding GitHub release if missing locally. The ToolManager repo should have a GHA workflow in place to build, archive, and upload each version.
- a shell_hook script is downloaded and run to insert the following line in the shell profile: `[[ -s "$HOME/.toolManager/shell_hook.sh" ]] && source "$HOME/.toolManager/shell_hook.sh"`. This allows switching location in the terminal and loading the active tools for the current project.

Showing an example of shell_hook.sh is in order:

```bash
#!/bin/bash
# Overrides 'cd' to update PATH when entering a directory with a local tool setup.

# Add the project-specific bin directory to PATH if it exists.
update_tool_path() {
  local tool_bin_dir="$PWD/.toolManager/active"
  if [[ -d "$tool_bin_dir" ]]; then
    export PATH="$tool_bin_dir:$PATH"
  fi
}

# Redefine 'cd' to trigger the path update after changing directories.
cd() {
  builtin cd "$@" || return
  update_tool_path
}

# --- Installation Logic ---
# The following function only runs when this script is sourced by an installer.
install_hook() {
  local rc_file
  case "${SHELL##*/}" in
    bash) rc_file="$HOME/.bashrc" ;;
    zsh)  rc_file="$HOME/.zshrc" ;;
    *)
      echo "Unsupported shell for hook installation: $SHELL" >&2
      return 1
      ;;
  esac

  # The line to add to the shell's startup file.
  local hook_line="[[ -s \"$HOME/.toolManager/shell_hook.sh\" ]] && source \"$HOME/.toolManager/shell_hook.sh\""

  # Add the hook if it's not already present.
  if ! grep -Fxq "$hook_line" "$rc_file" &>/dev/null; then
    printf "\n%s\n" "$hook_line" >> "$rc_file"
    echo "Shell hook installed in $rc_file. Restart your shell to apply changes."
  fi
}

# This check ensures 'install_hook' only runs when sourced, not when executed.
if [[ "${BASH_SOURCE[0]}" != "$0" ]]; then
  install_hook
fi
```

Now that we have a working installation of ToolManager, let's define our Toolfile.yml in our project folder:

```yaml
---
tools:
  - name: PackageGenerator
    binaryPath: PackageGenerator
    version: 3.3.0
    zipUrl: https://github.com/justeattakeaway/PackageGenerator/releases/download/3.3.0/PackageGenerator-macOS.zip
  - name: SwiftLint
    binaryPath: swiftlint
    version: 0.57.0
    zipUrl: https://github.com/realm/SwiftLint/releases/download/0.57.0/portable_swiftlint.zip
  - name: ToggleGen
    binaryPath: ToggleGen
    version: 1.0.0
    zipUrl: https://github.com/TogglesPlatform/ToggleGen/releases/download/1.0.0/ToggleGen-macOS-universal-binary.zip
  - name: Tuist
    binaryPath: tuist
    version: 4.48.0
    zipUrl: https://github.com/tuist/tuist/releases/download/4.48.0/tuist.zip
  - name: Sourcery
    binaryPath: bin/sourcery
    version: 2.2.5
    zipUrl: https://github.com/krzysztofzablocki/Sourcery/releases/download/2.2.5/sourcery-2.2.5.zip
```
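As an illustration of how ToolManager could represent this spec internally, the YAML maps naturally onto a small Codable model. This is a sketch assuming the Yams library for decoding; the type names are invented:

```swift
import Foundation
import Yams // Assumed YAML library; any YAML-to-Codable decoder would do.

// Hypothetical model mirroring the Toolfile.yml structure above.
struct Toolfile: Decodable {
    let tools: [Tool]
}

struct Tool: Decodable {
    let name: String
    let binaryPath: String // Path of the executable inside the zip archive.
    let version: String
    let zipUrl: URL
}

// Load and decode the spec from the current directory.
let yaml = try String(contentsOfFile: "Toolfile.yml", encoding: .utf8)
let toolfile = try YAMLDecoder().decode(Toolfile.self, from: yaml)
```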
The install command of ToolManager loads the Toolfile at the root of the repo and, for each defined dependency:

- checks if the version of the dependency already exists on the machine
- if it doesn't exist, downloads it, unzips it, and places the binary at ~/.toolManager/tools/ (e.g. ~/.toolManager/tools/PackageGenerator/3.3.0/PackageGenerator)
- creates a symlink to the binary in the project directory from .toolManager/active (e.g. .toolManager/active/PackageGenerator)

After running `ToolManager install` (or `ToolManager install --spec=Toolfile.yml`), ToolManager should produce the following structure:

```
~ tree ~/.toolManager/tools -L 2
├── PackageGenerator
│   └── 3.3.0
├── Sourcery
│   └── 2.2.5
├── SwiftLint
│   └── 0.57.0
├── ToggleGen
│   └── 1.0.0
└── Tuist
    └── 4.48.0
```

and from the project folder:

```
ls -la .toolManager/active
<redacted> PackageGenerator -> /Users/alberto/.toolManager/tools/PackageGenerator/3.3.0/PackageGenerator
<redacted> Sourcery -> /Users/alberto/.toolManager/tools/Sourcery/2.2.5/Sourcery
<redacted> SwiftLint -> /Users/alberto/.toolManager/tools/SwiftLint/0.57.0/SwiftLint
<redacted> ToggleGen -> /Users/alberto/.toolManager/tools/ToggleGen/1.0.0/ToggleGen
<redacted> Tuist -> /Users/alberto/.toolManager/tools/Tuist/4.48.0/Tuist
```

Bumping the versions of some tools in the Toolfile, for example SwiftLint and Tuist, and re-running the install command should result in the following:

```
~ tree ~/.toolManager/tools -L 2
├── PackageGenerator
│   └── 3.3.0
├── Sourcery
│   └── 2.2.5
├── SwiftLint
│   ├── 0.57.0
│   └── 0.58.2
├── ToggleGen
│   └── 1.0.0
└── Tuist
    ├── 4.48.0
    └── 4.54.3
```

```
ls -la .toolManager/active
<redacted> PackageGenerator -> /Users/alberto/.toolManager/tools/PackageGenerator/3.3.0/PackageGenerator
<redacted> Sourcery -> /Users/alberto/.toolManager/tools/Sourcery/2.2.5/Sourcery
<redacted> SwiftLint -> /Users/alberto/.toolManager/tools/SwiftLint/0.58.2/SwiftLint
<redacted> ToggleGen -> /Users/alberto/.toolManager/tools/ToggleGen/1.0.0/ToggleGen
<redacted> Tuist -> /Users/alberto/.toolManager/tools/Tuist/4.54.3/Tuist
```
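For illustration, the per-tool logic that produces the layout above could look roughly like the sketch below, reusing the hypothetical Tool type from the earlier snippet. Paths, helpers and error handling are simplified; this is not the actual implementation:

```swift
import Foundation

// Rough sketch of the per-tool install step.
func install(_ tool: Tool, projectDir: URL) throws {
    let fm = FileManager.default
    let versionDir = fm.homeDirectoryForCurrentUser
        .appendingPathComponent(".toolManager/tools/\(tool.name)/\(tool.version)")
    let binaryName = (tool.binaryPath as NSString).lastPathComponent
    let installedBinary = versionDir.appendingPathComponent(binaryName)

    // 1. Download and unzip only if this version is missing from ~/.toolManager/tools.
    if !fm.fileExists(atPath: installedBinary.path) {
        let tmpDir = fm.temporaryDirectory.appendingPathComponent(UUID().uuidString)
        try fm.createDirectory(at: tmpDir, withIntermediateDirectories: true)
        let zip = tmpDir.appendingPathComponent("\(tool.name).zip")
        try Data(contentsOf: tool.zipUrl).write(to: zip)

        let unzip = Process()
        unzip.executableURL = URL(fileURLWithPath: "/usr/bin/unzip")
        unzip.arguments = ["-o", "-qq", zip.path, "-d", tmpDir.path]
        try unzip.run()
        unzip.waitUntilExit()

        try fm.createDirectory(at: versionDir, withIntermediateDirectories: true)
        try fm.copyItem(at: tmpDir.appendingPathComponent(tool.binaryPath), to: installedBinary)
    }

    // 2. (Re)create the symlink in the project's .toolManager/active directory.
    let activeDir = projectDir.appendingPathComponent(".toolManager/active")
    try fm.createDirectory(at: activeDir, withIntermediateDirectories: true)
    let link = activeDir.appendingPathComponent(tool.name)
    try? fm.removeItem(at: link)
    try fm.createSymbolicLink(at: link, withDestinationURL: installedBinary)
}
```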
CI Setup

On CI, the setup is quite simple. It involves 2 steps:

- install ToolManager
- install the tools

The commands can be wrapped in GitHub composite actions:

```yaml
name: Install ToolManager
runs:
  using: composite
  steps:
    - name: Install ToolManager
      shell: bash
      run: curl -Ls 'https://my-bucket.s3.eu-west-1.amazonaws.com/install_toolmanager.sh' | bash
```

```yaml
name: Install tools
inputs:
  spec:
    description: The name of the ToolManager spec file
    required: false
    default: Toolfile.yml
runs:
  using: composite
  steps:
    - name: Install tools
      shell: bash
      run: |
        ToolManager install --spec=${{ inputs.spec }}
        echo "$PWD/.toolManager/active" >> $GITHUB_PATH
```

simply used in workflows:

```yaml
- name: Install ToolManager
  uses: ./.github/actions/install-toolmanager
- name: Install tools
  uses: ./.github/actions/install-tools
  with:
    spec: Toolfile.yml
```

CLI tools conformance

ToolManager can install tools that are made available as zip files, without the need to implement any particular spec. Depending on the CLI tool, the executable can be at the root of the zip archive or in a subfolder. Sourcery, for example, places the executable in the bin folder.

```yaml
- name: Sourcery
  binaryPath: bin/sourcery
  version: 2.2.5
  zipUrl: https://github.com/krzysztofzablocki/Sourcery/releases/download/2.2.5/sourcery-2.2.5.zip
```

GitHub releases are great for hosting releases as zip files, and that's all we need. Ideally, one should decorate the repositories with appropriate release workflows. Following is a simple example that builds a macOS binary. It could be extended to also create a Linux binary.

```yaml
name: Publish Release

on:
  push:
    tags:
      - '*'

env:
  CLI_NAME: my-awesome-cli-tool

permissions:
  contents: write

jobs:
  build-and-archive:
    name: Build and Archive macOS Binary
    runs-on: macos-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Setup Xcode
        uses: maxim-lobanov/setup-xcode@v1
        with:
          xcode-version: '16.4'
      - name: Build universal binary
        run: swift build -c release --arch arm64 --arch x86_64
      - name: Archive the binary
        run: |
          cd .build/apple/Products/Release/
          zip -r "${{ env.CLI_NAME }}-macOS.zip" "${{ env.CLI_NAME }}"
      - name: Upload artifact for release
        uses: actions/upload-artifact@v4
        with:
          name: cli-artifact
          path: .build/apple/Products/Release/${{ env.CLI_NAME }}-macOS.zip

  create-release:
    name: Create GitHub Release
    needs: [build-and-archive]
    runs-on: ubuntu-latest
    steps:
      - name: Download CLI artifact
        uses: actions/download-artifact@v4
        with:
          name: cli-artifact
      - name: Create Release and Upload Asset
        uses: softprops/action-gh-release@v2
        with:
          files: "${{ env.CLI_NAME }}-macOS.zip"
```

A note on version pinning

Dependency management systems tend to use a lock file (like Package.resolved in Swift Package Manager, Podfile.lock in the old days of CocoaPods, yarn.lock/package-lock.json in JavaScript, etc.). The benefits of using a lock file are mainly two:

- Reproducibility: it locks the exact versions (including transitive dependencies) so that every team member, CI server, or production environment installs the same versions.
- Faster installs: dependency managers can skip version resolution if a lock file is present, using it directly to fetch the exact versions, improving speed.

We can remove the need for lock files if we pin the versions in the spec (the file defining the tools). If version range operators such as CocoaPods' optimistic operator (~>) and SPM's .upToNextMajor didn't exist, lock files would lose much of their utility. While useful, lock files are generally annoying and can create that odd feeling of seeing unexpected updates in pull requests made by others. ToolManager doesn't use a lock file; instead, it requires teams to pin their tools' versions, which I strongly believe is a good practice. This approach comes at the cost of teams having to keep an eye out for patch releases and not leaving updates to the machine, which risks pulling in dependencies that don't respect Semantic Versioning (SemVer).

Support for different architectures

This design also allows supporting different platforms. Some CI workflows might only need a Linux runner to reduce the burden on precious macOS instances. Both macOS and Linux can be supported with individual Toolfiles that can be specified when running the install command.

```bash
# on macOS
ToolManager install --spec=Toolfile_macOS

# on Linux
ToolManager install --spec=Toolfile_Linux
```

Conclusion

The design described in this article powers the solution implemented at JET and has served our teams successfully since October 2023. JET has always preferred to implement in-house solutions where possible and sensible, and I can say that moving away from Homebrew was a blessing. With this design, the work usually done by a package manager and a central spec repository is shifted to the individual components, which are only required to publish releases as zip archives, ideally via a release workflow. By decentralising and requiring version pinning, we made ToolManager a simple yet powerful system for managing the installation of CLI tools.

How to setup a Swift Package Registry in Artifactory
- swift
- registry
- artifactory
- package
A quick guide to setting up a Swift Package Registry with Artifactory to speed up builds and streamline dependency management.
Introduction

It's very difficult to stop GenAI from hallucinating when it comes to Swift Package Registry. No surprise there: the feature is definitely niche, has not been widely adopted, and there's a lack of examples online. As Dave put it, Swift Package Registries had an even rockier start than SPM. I've recently implemented a Swift Package Registry on Artifactory for my team and thought of summarising my experience here while it's still fresh in my head. While some details are left out, the happy path should be covered. I hope this article helps you all indirectly by providing more material to the LLM overlords.

Problem

The main problem that led us to look into Swift Package Registry is that SPM deep-clones the entire Git repository of each dependency, which became time-consuming: our CI jobs took a few minutes just to pull all the Swift packages. For dependencies with very large repositories, such as SendbirdUIKit (more than 2GB), one could rely on pre-compiled XCFrameworks as a workaround; Airbnb provides one via the SPM-specific repo for Lottie. A Swift Registry allows serving dependencies as zip artifacts containing only the required revision, avoiding the deep clone of the Git repositories.

What is a Swift Package Registry?

A Swift Package Registry is a server that stores and vends Swift packages by implementing SE-0292 and the corresponding specification. Instead of relying on Git repositories to source our dependencies, we can use a registry to download them as versioned archives (zip files). See the official documentation: swift-package-manager/Documentation/PackageRegistry/PackageRegistryUsage.md (swiftlang/swift-package-manager on GitHub).

The primary advantages of using a Swift Package Registry are:

- Reduced CI/CD pipeline times: lightweight zip archives are fetched from the registry rather than cloning entire repositories from GitHub.
- Improved developer machine performance: the same time savings seen on CI are reflected on developers' machines during dependency resolution.
- Availability: by hosting a registry, teams are no longer dependent on the availability of external source control systems like GitHub, but rather on internal ones (for example, self-hosted Artifactory).
- Security: injecting vulnerabilities into popular open-source projects is known as a supply chain attack and has become increasingly common in recent years. A registry makes it possible to adopt a process to trust the sources published on it.

Platforms

Apple has accepted the Swift Registry specification and implemented support for interacting with registries within SPM, but has left the implementation of actual registries to third-party platforms. Apple is not in the business of providing a Swift Registry. The main platform that has adopted Swift Registries is Artifactory ("Artifactory, Your Swift Package Repository": JFrog offers a Swift binary package repository, enabling developers to resolve Swift dependencies from Artifactory instead of enterprise source control systems), although AWS CodeArtifact ("Add Your Swift Packages to AWS CodeArtifact" by Sébastien Stormacq), Cloudsmith, and Tuist ("Announcing Tuist Registry" by Marek Fořt) provide support too.
The benefits are usually appealing to teams with large apps, hence it's reasonable to believe that mostly big companies have looked into adopting a registry successfully.

Artifactory Setup

Let's assume a JFrog Artifactory instance to host our Swift Package Registry exists at https://packages.acme.com. Artifactory supports local, remote, and virtual repositories, but a realistic setup consists of only local and virtual repositories (source: Artifactory). Local repositories are meant to be used for publishing dependencies from CI pipelines. Virtual repositories are instead meant to be used for resolving (pulling) dependencies on both CI and developers' machines. Remote repositories are not really relevant in a typical Swift Registry setup.

Following the documentation at https://jfrog.com/help/r/jfrog-artifactory-documentation/set-up-a-swift-registry, let's create 2 repositories with the following names:

- local repository: swift-local
- virtual repository: swift-virtual

Local Setup

To pull dependencies from the Swift Package Registry, we need to configure the local environment.

1. Set the Registry URL

First, we need to inform SPM about the existence of the registry. We can do this on a per-project basis or globally for the user account. From a package's root directory, run the following command. This will create a .swiftpm/configuration/registries.json file within your project folder.

```bash
swift package-registry set "https://packages.acme.com/artifactory/api/swift/swift-virtual"
```

The resulting registries.json file will look like this:

```json
{
  "authentication": {},
  "registries": {
    "[default]": {
      "supportsAvailability": false,
      "url": "https://packages.acme.com/artifactory/api/swift/swift-virtual"
    }
  },
  "version": 1
}
```

To set the registry for all your projects, use the --global flag.

```bash
swift package-registry set --global "https://packages.acme.com/artifactory/api/swift/swift-virtual"
```

This will create the configuration file at ~/.swiftpm/configuration/registries.json. Xcode projects don't support project-level registries nor (in my experience) scopes other than the default one (i.e. avoid using the --scope flag).

2. Authentication

To pull packages, authenticating with Artifactory is usually required, although your company may allow all artifacts on Artifactory to be read without authentication as long as one is connected to the company VPN. In cases where authentication is required, SPM uses a .netrc file in the home directory to find credentials for remote servers. This file is a standard way to handle login information for various network protocols. Using a token generated from the Artifactory dashboard, the line to add to the .netrc file would be:

```
machine packages.acme.com login <your_artifactory_username> password <your_artifactory_token>
```

Alternatively, it's possible to log in using the swift package-registry login command, which securely stores your token in the system's keychain.
```bash
swift package-registry login "https://packages.acme.com/artifactory/api/swift/swift-virtual" \
  --token <token>

# or

swift package-registry login "https://packages.acme.com/artifactory/api/swift/swift-virtual" \
  --username <username> \
  --password <token_treated_as_password>
```

CI/CD Setup

On CI, the setup is slightly different as the goals are:

- to resolve dependencies in CI/CD jobs
- to publish new package versions in CD jobs, for both internal and external dependencies

The steps described for the local setup are valid for resolution on CI too. The interesting part here is how publishing is done. I will assume the usage of GitHub Actions.

1. Retrieving the Artifactory Token

The JFrog CLI can be used via the setup-jfrog-cli action to authenticate using the most appropriate method. You might want to wrap the action in a custom composite one exporting the token as the output of a step:

```bash
TOKEN=$(jf config export)
echo "::add-mask::$TOKEN"
echo "artifactory-token=$TOKEN" >> "$GITHUB_OUTPUT"
```

2. Logging into the Registry

The CI job must log in to the local repository (swift-local) to gain push permissions. The token retrieved in the previous step is used for this purpose.

```bash
swift package-registry login \
  "https://packages.acme.com/artifactory/api/swift/swift-local" \
  --token ${{ steps.get-token.outputs.artifactory-token }}
```

3. Publishing Packages

Swift Registry requires archives created with the swift package archive-source command from the dependency folder. E.g.

```bash
swift package archive-source -o "Alamofire-5.10.1.zip"
```

We could avoid creating the archive and instead download it directly from GitHub releases.

```bash
curl -L -o Alamofire-5.10.1.zip \
  https://github.com/Alamofire/Alamofire/archive/refs/tags/5.10.1.zip
```

Uploading the archive can then be done using the JFrog CLI, set up via the setup-jfrog-cli action. If going down this route, the upload command would be:

```bash
jf rt upload Alamofire-5.10.1.zip \
  https://packages.acme.com/artifactory/api/swift/swift-local/acme/Alamofire/Alamofire-5.10.1.zip
```

There is a specific structure to respect:

```
<REPOSITORY>/<SCOPE>/<NAME>/<NAME>-<VERSION>.zip
```

which is the last part of the above URL: swift-local/acme/Alamofire/Alamofire-5.10.1.zip.

Too bad that using the steps above causes a downstream problem: SPM is not able to resolve the dependencies in the registry. I tried extensively and couldn't find the reason why SPM wasn't happy with how the packages were published. I might have missed something, but eventually I had to switch to the publish command. Using the swift package-registry publish command instead doesn't present this issue, hence it's the solution adopted in this workflow.

```bash
swift package-registry publish acme.Alamofire 5.10.1 \
  --url https://packages.acme.com/artifactory/api/swift/swift-local \
  --scratch-directory $(mktemp -d)
```

To verify that the upload and indexing succeeded, check that the uploaded *.zip artifact is available and that the .swift folder exists (an indication that indexing has occurred). If the specific structure is not respected, the .swift folder won't be generated.

Consuming Packages from the Registry

Packages

The easiest and only documented way to consume a package from a registry is via a Package. In the Package.swift file, use the .package(id:from:) syntax to declare a registry-based dependency. The id is a combination of the scope and the package name.
```swift
// ...
    dependencies: [
        .package(id: "acme.Alamofire", from: "5.10.1"),
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [
                .product(name: "Alamofire", package: "acme.Alamofire"),
            ]
        ),
        // ...
    ]
)
```

Run swift package resolve or simply build the Package in Xcode to pull the dependencies.

You might bump into transitive dependencies (i.e. dependencies listed in the Package.swift files of the packages published on the registry) pointing to GitHub. In this case, it'd be great to instruct SPM to use the corresponding versions on the registry. The --replace-scm-with-registry flag is designed to work for the entire dependency graph, including transitive dependencies.

The cornerstone of associating a registry-hosted package with its GitHub origin is the package-metadata.json file. This file allows providing essential metadata about the packages at publishing time (the --metadata-path flag of the publish command defaults to package-metadata.json). Crucially, it includes a field to specify the source control repository URLs. When swift package resolve --replace-scm-with-registry is executed, SPM queries the configured registry. The registry then uses the information from package-metadata.json to map the package identity to its corresponding GitHub URL, enabling a smooth and transparent resolution process. The metadata file must conform to the JSON schema defined in SE-0391. It is recommended to include all URL variations (e.g., SSH, HTTPS) for the same repository. E.g.

```json
{
  "repositoryURLs": [
    "https://github.com/Alamofire/Alamofire",
    "https://github.com/Alamofire/Alamofire.git",
    "git@github.com:Alamofire/Alamofire.git"
  ]
}
```

Printing the dependencies should confirm the source of the dependencies:

```bash
swift package show-dependencies --replace-scm-with-registry
```

When loading a package with Xcode, the flag can be enabled via an environment variable in the scheme:

```
IDEPackageDependencySCMToRegistryTransformation=useRegistryIdentityAndSources
```

Too bad that for packages, the schemes won't load until SPM completes the resolution, hence running the following from the terminal addresses the issue:

```bash
defaults write com.apple.dt.Xcode IDEPackageDependencySCMToRegistryTransformation useRegistryIdentityAndSources
```

which can be unset with:

```bash
defaults delete com.apple.dt.Xcode IDEPackageDependencySCMToRegistryTransformation
```

Xcode

It's likely that you'll want to use the registry from Xcode projects for direct dependencies. If using the Tuist registry, it seems you would be able to leverage a Package Collection to add dependencies from the registry via the Xcode UI. Note that until Xcode 26 Beta 1, it's not possible to add registry dependencies directly in the Xcode UI, but if you use Tuist to generate your project (as you should), you can use Package.registry (introduced with https://github.com/tuist/tuist/pull/7225). E.g.

```swift
let project = Project(
    // ...
    packages: [
        .registry(
            identifier: "acme.Alamofire",
            requirement: .exact(Version(stringLiteral: "5.10.1"))
        )
    ],
    // ...
)
```

If not using Tuist, you'd have to rely on setting IDEPackageDependencySCMToRegistryTransformation either as an environment variable in the scheme or globally via the terminal.
You can also use xcodebuild to resolve dependencies using the correct flag:

```bash
xcodebuild \
  -resolvePackageDependencies \
  -packageDependencySCMToRegistryTransformation useRegistryIdentityAndSources
```

Conclusions

We've found that using an in-house Swift registry drastically reduces dependency resolution time and size on disk by downloading only the required revision instead of the entire, potentially large, Git repository. This improvement benefits both CI pipelines and developers' local environments. Additionally, registries help mitigate the risk of supply chain attacks.

As of this writing, Swift registries are not widely adopted, which is reflected in the limited number of platforms that support them. The linked forum thread also shows various bugs I myself bumped into when using particular configurations (source: https://forums.swift.org/t/package-registry-support-in-xcode/73626/19). It's unclear whether adoption will grow and uncertain whether Apple will ever address the issues reported by the community, but once a functioning setup is in place, registries offer an efficient and secure alternative to using XCFrameworks in production builds and reduce both memory and time footprints.

Scalable Continuous Integration for iOS
- CI
- mobile
- iOS
- AWS
- macOS
How Just Eat Takeaway.com leverage AWS, Packer, Terraform and GitHub Actions to manage a CI stack of macOS runners.
Originally published on the Just Eat Takeaway Engineering Blog.

How Just Eat Takeaway.com leverage AWS, Packer, Terraform and GitHub Actions to manage a CI stack of macOS runners.

Problem

At Just Eat Takeaway.com (JET), our journey through continuous integration (CI) reflects a landscape of innovation and adaptation. Historically, JET's multiple iOS teams operated independently, each employing their own CI solutions. The original Just Eat iOS and Android teams had pioneered an in-house CI solution anchored in Jenkins. This setup, detailed in our 2021 article, served as the backbone of our CI practices up until 2020. It was during this period that the iOS team initiated a pivotal migration: moving from in-house Mac Pros and Mac Minis to AWS EC2 macOS instances.

Fast forward to 2023, a significant transition occurred within our Continuous Delivery Engineering (CDE) Platform Engineering team. The decision to adopt GitHub Actions company-wide marked the end of our reliance on Jenkins, while other teams are in the process of migrating away from solutions such as CircleCI and GitLab CI. This transition represented a fundamental shift in our CI philosophy. By moving away from Jenkins, we eliminated the need to maintain an instance for the Jenkins server and the complexities of managing how agents connected to it. Our focus then shifted to transforming our Jenkins pipelines into GitHub Actions workflows. This transformation extended beyond mere tool adoption: our primary goal was to ensure that our macOS instances were not only scalable but also configured in code. We thereby enhanced our global CI practices and set standards across the entire company.

Desired state of CI

As we embarked on our journey to refine and elevate our CI process, we envisioned a state-of-the-art CI system. Our goals were ambitious yet clear, focusing on scalability, automation, and efficiency. At the time of implementing the system, no other player in the industry seemed to have implemented the complete solution we envisioned. Below is a summary of our desired CI state:

- Instance setup in code: one primary objective was to define the setup of the instances entirely in code. This includes specifying the macOS version, Xcode version, Ruby version, and other crucial configurations. For this purpose, the HashiCorp tool Packer emerged once again as an ideal solution, offering the flexibility and precision we required.
- IaC (Infrastructure as Code) for macOS instances: to define the infrastructure of our fleet of macOS instances, we leaned towards Terraform, another HashiCorp tool. Terraform provided us with the capability to not only deploy but also to scale and migrate our infrastructure seamlessly, crucially maintaining its state.
- Auto and Manual Scaling: we wanted the ability to dynamically create CI runners based on demand, ensuring that resources were optimally utilized and available when needed. To optimize resource utilization, especially during off-peak hours, we desired an autoscaling feature. Scaling down our CI runners on weekends, when developer activity is minimal, was critical to being cost-effective.
- Automated Connection to GitHub Actions: we aimed for the instances to automatically connect to GitHub Actions as runners upon deployment. This automation was crucial in eliminating manual interventions via SSH or VNC.
- Multi-Team Use: our vision included CI runners that could be easily used by multiple teams across different time zones. This would not only maximize the utility of our infrastructure but also encourage reuse and standardization.
- Centralized Management via GitHub Actions: to further streamline our CI processes, we intended to run all tasks through GitHub Actions workflows. This approach would allow the teams to self-serve and alleviate the need for developers to use Docker or maintain local environments.

Getting to the desired state was a journey that presented multiple challenges and constant adjustments to make sure we could migrate smoothly to the new system.

Instance setup in code

We implemented the desired configuration with Packer, leveraging a number of Shell Provisioners and variables to configure the instance. Here are some of the configuration steps:

- Set user password (to allow remote desktop access)
- Resize the partition to use all the space available on the EBS volume
- Start the Apple Remote Desktop agent and enable remote desktop access
- Update Brew & install Brew packages
- Install CloudWatch agent
- Install rbenv/Ruby/bundler
- Install Xcode versions
- Install GitHub Actions actions-runner
- Copy scripts to connect to GitHub Actions as a runner
- Copy daemon to start the GitHub Actions self-hosted runner as a service
- Set macos-init modules to perform provisioning on the first launch

While the steps above are naturally configuration steps to perform when creating the AMI, the macos-init modules include steps to perform once the instance becomes available. The create_ami workflow accepts inputs that are eventually passed to Packer to generate the AMI.

```bash
packer build \
  --var ami_name_prefix=${{ env.AMI_NAME_PREFIX }} \
  --var region=${{ env.REGION }} \
  --var subnet_id=${{ env.SUBNET_ID }} \
  --var vpc_id=${{ env.VPC_ID }} \
  --var root_volume_size_gb=${{ env.ROOT_VOLUME_SIZE_GB }} \
  --var macos_version=${{ inputs.macos-version }} \
  --var ruby_version=${{ inputs.ruby-version }} \
  --var xcode_versions='${{ steps.parse-xcode-versions.outputs.list }}' \
  --var gha_version=${{ inputs.gha-version }} \
  bare-metal-runner.pkr.hcl
```

Different teams often use different versions of software, like Xcode. To accommodate this, we permit multiple versions to be installed on the same instance. The choice of which version to use is then determined within the GitHub Actions workflows.

The seamless generation of AMIs has proven to be a significant enabler. For example, when Xcode 15.1 was released, we executed this workflow the same evening. In just over two hours, we had an AMI ready to deploy all the runners (it usually takes 70-100 minutes for a macOS AMI with a 400GB EBS volume to become ready after creation). This efficiency enabled our teams to use the new Xcode version just a few hours after its release.

IaC (Infrastructure as Code) for macOS instances

Initially, we used distinct Terraform modules for each instance to facilitate the deployment and decommissioning of each one. Given the high cost of EC2 Mac instances, we managed this process with caution, carefully balancing host usage while also being mindful of the 24-hour minimum allocation time. We ultimately ended up using Terraform to define a single infrastructure (i.e. a single Terraform module) defining resources such as:
- aws_key_pair, aws_instance, aws_ami
- aws_security_group, aws_security_group_rule
- aws_secretsmanager_secret
- aws_vpc, aws_subnet
- aws_cloudwatch_metric_alarm
- aws_sns_topic, aws_sns_topic_subscription
- aws_iam_role, aws_iam_policy, aws_iam_role_policy_attachment, aws_iam_instance_profile

A crucial part was to use count in aws_instance, setting its value from a variable passed in by the deploy_infra workflow. Terraform performs the necessary scaling upon changing the value. We have implemented a workflow to perform the Terraform apply and destroy commands for the infrastructure. Only the AMI and the number of instances are required as inputs.

```bash
terraform ${{ inputs.command }} \
  --var ami_name=${{ inputs.ami-name }} \
  --var fleet_size=${{ inputs.fleet-size }} \
  --auto-approve
```

Using the name of the AMI instead of the ID allows us to use the most recent one that was generated, useful in case of name clashes.

```hcl
variable "ami_name" {
  type = string
}

variable "fleet_size" {
  type = number
}

data "aws_ami" "bare_metal_gha_runner" {
  most_recent = true
  filter {
    name   = "name"
    values = ["${var.ami_name}"]
  }
  ...
}

resource "aws_instance" "bare_metal" {
  count         = var.fleet_size
  ami           = data.aws_ami.bare_metal_gha_runner.id
  instance_type = "mac2.metal"
  tenancy       = "host"
  key_name      = aws_key_pair.bare_metal.key_name
  ...
}
```

Instead of maintaining multiple CI instances with varying software configurations, we concluded that it's simpler and more efficient to have a single, standardised setup. While teams still have the option to create and deploy their unique setups, a smaller, unified system allows for easier support via a single global configuration.

Auto and Manual Scaling

The deploy_infra workflow allows us to scale on demand, but it doesn't release the underlying dedicated hosts, which are the resources that are ultimately billed. The autoscaling solution provided by AWS is great for VMs but gets sensibly more complex when actioned on dedicated hosts. Auto Scaling groups on macOS instances would require a Custom Managed License, a Host Resource Group and, of course, a Launch Template. Using only AWS services appears to be a lot of work to pull things together, and the result wouldn't allow for automatic release of the dedicated hosts. Airbnb mention in their Flexible Continuous Integration for iOS article that an internal scaling service was implemented:

"An internal scaling service manages the desired capacity of each environment's Auto Scaling group."

Some articles explain how to set up Auto Scaling groups for Mac instances (see 1 and 2) but, after careful consideration, we agreed that implementing a simple scaling service via GitHub Actions (GHA) was the easiest and most maintainable solution. We implemented 2 GHA workflows to fully automate the weekend autoscaling:

- Upscaling workflow to n, triggered at a specific time at the beginning of the working week
- Downscaling workflow to 1, triggered at a specific time at the beginning of the weekend

We retain the capability for manual scaling, which is essential for situations where we need to scale down, such as on bank holidays, or scale up, like on release cut days, when activity typically exceeds the usual levels. Additionally, we have implemented a workflow that runs multiple times a day and tries to release all available hosts without an instance attached. This lifts us from the burden of having to remember to release the hosts.
Dedicated hosts take up to 110 minutes to move from the Pending to the Available state due to the scrubbing workflow performed by AWS. Manual scaling can be executed between the times the autoscaling workflows are triggered, and the workflows must be resilient to unexpected statuses of the infrastructure, which thankfully Terraform takes care of. Both downscaling and upscaling are covered in the following flowchart. The autoscaling values are defined as configuration variables in the repo settings. It usually takes ~8 minutes for an EC2 mac2.metal instance to become reachable after creation, meaning that we can redeploy the entire infrastructure very quickly.

Automated Connection to GitHub Actions

We provide some user data when deploying the instances.

```hcl
resource "aws_instance" "bare_metal" {
  ami   = data.aws_ami.bare_metal_gha_runner.id
  count = var.fleet_size
  ...
  user_data = <<EOF
{
  "github_enterprise": "<GHE_ENTERPRISE_NAME>",
  "github_pat_secret_manager_arn": ${data.aws_secretsmanager_secret_version.ghe_pat.arn},
  "github_url": "<GHE_ENTERPRISE_URL>",
  "runner_group": "CI-MobileTeams",
  "runner_name": "bare-metal-runner-${count.index + 1}"
}
EOF
}
```

The user data is stored in a specific folder by macos-init and we implement a module to copy the content to ~/actions-runner-config.json.

```toml
### Group 10 ###
[[Module]]
  Name = "Create actions-runner-config.json from userdata"
  PriorityGroup = 10
  RunPerInstance = true
  FatalOnError = false
  [Module.Command]
    Cmd = ["/bin/zsh", "-c", 'instanceId="$(curl http://169.254.169.254/latest/meta-data/instance-id)"; if [[ ! -z $instanceId ]]; then cp /usr/local/aws/ec2-macos-init/instances/$instanceId/userdata ~/actions-runner-config.json; fi']
    RunAsUser = "ec2-user"
```

which is in turn used by the configure_runner.sh script to configure the GitHub Actions runner.

```bash
#!/bin/bash

GITHUB_ENTERPRISE=$(cat $HOME/actions-runner-config.json | jq -r .github_enterprise)
GITHUB_PAT_SECRET_MANAGER_ARN=$(cat $HOME/actions-runner-config.json | jq -r .github_pat_secret_manager_arn)
GITHUB_PAT=$(aws secretsmanager get-secret-value --secret-id $GITHUB_PAT_SECRET_MANAGER_ARN | jq -r .SecretString)
GITHUB_URL=$(cat $HOME/actions-runner-config.json | jq -r .github_url)
RUNNER_GROUP=$(cat $HOME/actions-runner-config.json | jq -r .runner_group)
RUNNER_NAME=$(cat $HOME/actions-runner-config.json | jq -r .runner_name)

RUNNER_JOIN_TOKEN=`curl -L \
  -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_PAT" \
  $GITHUB_URL/api/v3/enterprises/$GITHUB_ENTERPRISE/actions/runners/registration-token | jq -r '.token'`

MACOS_VERSION=`sw_vers -productVersion`
XCODE_VERSIONS=`find /Applications -type d -name "Xcode-*" -maxdepth 1 \
  -exec basename {} \; \
  | tr '\n' ',' \
  | sed 's/,$/\n/' \
  | sed 's/.app//g'`

$HOME/actions-runner/config.sh \
  --unattended \
  --url $GITHUB_URL/enterprises/$GITHUB_ENTERPRISE \
  --token $RUNNER_JOIN_TOKEN \
  --runnergroup $RUNNER_GROUP \
  --labels ec2,bare-metal,$RUNNER_NAME,macOS-$MACOS_VERSION,$XCODE_VERSIONS \
  --name $RUNNER_NAME \
  --replace
```

The above script is run by a macos-init module.

```toml
### Group 11 ###
[[Module]]
  Name = "Configure the GHA runner"
  PriorityGroup = 11
  RunPerInstance = true
  FatalOnError = false
  [Module.Command]
    Cmd = ["/bin/zsh", "-c", "/Users/ec2-user/configure_runner.sh"]
    RunAsUser = "ec2-user"
```

The GitHub documentation states that it's possible to create a customized service starting from a provided template.
It took some research and various attempts to find the right configuration that allows the connection without having to log in via the UI (over VNC), which would represent a blocker for a complete automation of the deployment. We believe that the single person who managed to get this right is Sébastien Stormacq, who provided the correct solution. The connection to GHA can be completed with 2 more modules that install the runner as a service and load the custom daemon.

```toml
### Group 12 ###
[[Module]]
  Name = "Run the self-hosted runner application as a service"
  PriorityGroup = 12
  RunPerInstance = true
  FatalOnError = false
  [Module.Command]
    Cmd = ["/bin/zsh", "-c", "cd /Users/ec2-user/actions-runner && ./svc.sh install"]
    RunAsUser = "ec2-user"

### Group 13 ###
[[Module]]
  Name = "Launch actions runner daemon"
  PriorityGroup = 13
  RunPerInstance = true
  FatalOnError = false
  [Module.Command]
    Cmd = ["sudo", "/bin/launchctl", "load", "/Library/LaunchDaemons/com.justeattakeaway.actions-runner-service.plist"]
    RunAsUser = "ec2-user"
```

Using a daemon instead of an agent (see Creating Launch Daemons and Agents) doesn't require us to set up any auto-login, which on macOS is a bit of a tricky procedure and is best avoided, also for security reasons. The following is the content of the daemon for completeness.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.justeattakeaway.actions-runner-service</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/ec2-user/actions-runner/runsvc.sh</string>
    </array>
    <key>UserName</key>
    <string>ec2-user</string>
    <key>GroupName</key>
    <string>staff</string>
    <key>WorkingDirectory</key>
    <string>/Users/ec2-user/actions-runner</string>
    <key>RunAtLoad</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/Users/ec2-user/Library/Logs/com.justeattakeaway.actions-runner-service/stdout.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/ec2-user/Library/Logs/com.justeattakeaway.actions-runner-service/stderr.log</string>
    <key>EnvironmentVariables</key>
    <dict>
        <key>ACTIONS_RUNNER_SVC</key>
        <string>1</string>
    </dict>
    <key>ProcessType</key>
    <string>Interactive</string>
    <key>SessionCreate</key>
    <true/>
</dict>
</plist>
```

Not long after the deployment, all the steps above are executed and we can appreciate the runners appearing as connected.

Multi-Team Use

We start the downscaling at 11:59 PM on Fridays and start the upscaling at 6:00 AM on Mondays. These times have been chosen in a way that guarantees a level of service to teams in the UK, the Netherlands (GMT+1) and Canada (Winnipeg is on GMT-6), accounting for BST (British Summer Time) and DST (Daylight Saving Time) too. Times are defined in UTC in the GHA workflow triggers and the local time of the runner is not taken into account.

Since the instances are used to build multiple projects and tools owned by different teams, one problem we faced was that instances could get compromised if workflows included unsafe steps (e.g. modifications to global configurations). GitHub Actions has a documentation page about hardening self-hosted runners specifically stating:

"Self-hosted runners for GitHub do not have guarantees around running in ephemeral clean virtual machines, and can be persistently compromised by untrusted code in a workflow."

We try to combat such potential problems by educating people on how to craft workflows and rely on the quick redeployment of the stack should the instances break.
We also run scripts before and after each job to ensure that instances can be reused as much as possible. This includes actions like deleting the simulators' content, derived data, caches and archives.

Centralized Management via GitHub Actions

The macOS runners stack is defined in a dedicated macOS-runners repository. We implemented GHA workflows to cover the use cases that allow teams to self-serve:

- create macOS AMI
- deploy CI
- downscale for the weekend*
- upscale for the working week*
- release unused hosts*

(*) run without inputs and on a scheduled trigger

The runners running the jobs in this repo are small t2.micro Linux instances and come with the AWS CLI installed. An IAM instance role with the correct policies is used to make sure that the aws ec2 commands allocate-hosts, describe-hosts and release-hosts can execute, and we use jq to parse the API responses.

A note on VM runners

In this article, we discussed how we've used bare metal instances as runners. We have spent a considerable amount of time investigating how we could leverage the Virtualization framework provided by Apple to create virtual machines via Tart. If you've grasped the complexity of crafting a CI system of runners on bare metal instances, you can understand that introducing VMs makes the setup sensibly more convoluted, which would be best discussed in a separate article. While a setup with Tart VMs has been implemented, we realised that it's not performant enough to be put to use. Using VMs, the number of runners would double, but we preferred to have native performance as the slowdown is over 40% compared to bare metal. Moreover, when it comes to running heavy UI test suites like ours, tests became too flaky. Testing the VMs, we also realised that the standard values of Throughput and IOPS on the EBS volume didn't seem to be enough and caused disk congestion, resulting in an unacceptable slowdown in performance.

Here is a quick summary of the setup and the challenges we have faced:

- Virtual runners require 2 images: one for the VMs (tart) and one for the host (AMI).
- We use Packer to create VM images (Vanilla, Base, IDE, Tools) with the required software, based on the templates provided by Tart, and we store the OCI-compliant images on ECR.
- We create these images on CI with dedicated workflows similar to the one described earlier for bare metal but, in this case, macOS runners (instead of Linux) are required, as publishing to ECR is done with tart, which runs on macOS.
- Extra policies are required on the instance role to allow the runner to push to ECR (using temporary_iam_instance_profile_policy_document in Packer's Amazon EBS builder).
- Apple set a limit of 2 VMs that can run on an instance, which would allow doubling the size of the fleet of runners.
- Creating AMIs hosting 2 VMs is done with Packer, and the steps include cloning the image from ECR and configuring macos-init modules to run daemons that run the VMs via Tart.
- Deploying a virtual CI infrastructure is identical to what has already been described for bare metal.
- Connecting to and interfacing with the VMs happens from within the host. Opening SSH and especially VNC sessions from within the bare metal instances can be very confusing.
- The version of macOS on the host and the one on the VMs could differ. The version used on the host must be provided with an AMI by AWS, while the version used on the VMs is provided by Apple in IPSW files (see ipsw.me).
- The GHA runners run on the VMs, meaning that the host won't require Xcode installed nor any other software used by the workflows.
- VMs don't allow for provisioning, meaning we have to share configurations with the VMs via shared folders on the host with the --dir flag, which causes extra setup complexity.
- VMs can't easily run the GHA runner as a service. The svc script requires the runner to be configured first, an operation that cannot be done during the provisioning of the host. We therefore need to implement an agent ourselves to configure and connect the runner in a single script.
- To have UI access (a-la VNC) to the VMs, it's first required to stop the VMs and then run them without the --no-graphics flag. At the time of writing, copy-pasting won't work even if using the --vnc or --vnc-experimental flags.
- Tartelet is a macOS app on top of Tart that allows managing multiple GitHub Actions runners in ephemeral environments on a single host machine. We didn't consider it, to avoid relying on too much third-party software and because it doesn't yet have GitHub Enterprise support.
- Worth noting that the Tart team worked on an orchestration solution named Orchard that seems to be in its initial stage.

Conclusion

In 2023 we revamped and globalised our approach to CI. We migrated from Jenkins to GitHub Actions as the CI/CD solution of choice for the whole group and have profoundly optimised and improved our pipelines, introducing a greater level of job parallelisation. We have implemented a brand new scalable setup for bare metal macOS runners leveraging the HashiCorp tools Packer and Terraform. We have also implemented a setup based on Tart virtual machines. We have increased the size of our iOS team over the past few years, now including more than 40 developers, and still managed to be successful with only 5 bare metal instances on average, which is a clear statement of how performant and optimised our setup is. We have extended the capabilities of our Internal Developer Platform with a globalised approach to providing macOS runners; we feel that this setup will stand the test of time and serve various teams across JET well for years to come.

The idea of a Fastlane replacement
The story behind Stellar, an attempt at a Fastlane replacement written in Swift, and what I learned from it.
Prelude

Fastlane is widely used by iOS teams all around the world. It has become the de facto standard for automating common tasks such as building apps, running tests, and uploading builds to App Store Connect. Fastlane has recently been moved under the Mobile Native Foundation, which is great news, as Google wasn't actively maintaining the project. At Just Eat Takeaway, we have implemented an extensive number of custom lanes to perform domain-specific tasks and used them from our CI.

The major problem with Fastlane is that it's written in Ruby. When it was born, using Ruby was a sound choice, but iOS developers are not necessarily familiar with the language, which represents a barrier to contributing and writing lanes. While Fastlane.swift, a version of Fastlane in Swift, has been in beta for years, it's not a rewrite in Swift but rather a "solution on top", meaning that developers and CI systems still have to rely on Ruby, install related software (rbenv or rvm) and most likely maintain a Gemfile. The average iOS dev knows well that Ruby environments are a pain to deal with and have caused an infinite number of headaches.

In recent years, Apple has introduced technologies that would enable a replacement of Fastlane written in Swift:

- Swift Package Manager (SPM)
- Swift Argument Parser (SAP)

Being a big fan of CLI tools written in Swift, I started maturing the idea of a Fastlane rewrite in Swift in early 2022. I circulated the idea with friends and colleagues for months and the sentiment was clear: it was time for a fresh, Fastlane-like tool written in Swift.

Journey

Towards the end of 2022, I was determined to start this project. I teamed up with 2 iOS devs (not working at Just Eat Takeaway) and we started working on a design. I was keen on calling the project "Swiftlane", but the preference seemed to be for the name "Interstellar", which was eventually shortened to "Stellar".

Fastlane has the concept of Actions, and I instinctively thought that in Swift-land they could take the form of SPM packages. This would make Stellar a modular system with pluggable components. For example, consider the Scan action in Fastlane: it could be a package that solely solves the same problem around testing. My goal was not to implement the plethora of existing Fastlane actions but rather to create a system that allows plugging in any package that builds on macOS. A sound design of such a system was crucial. The Stellar ecosystem I had in mind was composed of 4 parts:

Actions

Actions are the basic building blocks of the ecosystem. They are packages that define a library product. An action can do anything, from taking care of build tasks to integrating with GitHub. Actions are independent packages that have no knowledge of the Stellar system, which treats them as pluggable components to create higher abstractions. Ideally, actions should expose an executable product (the CLI tool) using SAP calling into the action code. This is not required by Stellar but it's advisable as a best practice. Official Actions would be hosted in the Stellar organisation on GitHub. Custom Actions could be created using Stellar.

Tasks

Tasks are specific to a project and implemented by the project developers. They are SAP ParsableCommand or AsyncParsableCommand types which use actions to construct complex logic specific to the needs of the project.
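To make the idea concrete, here's a sketch of what a project Task could look like under this design. The TestRunner type stands in for an action package and its API is purely hypothetical:

```swift
import ArgumentParser

// Stand-in for an action package; in the real design this would come from
// an SPM library product (e.g. a hypothetical "TestRunnerAction" package).
struct TestRunner {
    let scheme: String
    func runTests() async throws {
        // An actual action would shell out to xcodebuild / swift test here.
    }
}

// A project-specific Task: an AsyncParsableCommand composing actions,
// much like a lane composes actions in Fastlane. Names are illustrative.
struct RunUnitTests: AsyncParsableCommand {
    @Option(help: "The module to test.")
    var module: String

    func run() async throws {
        try await TestRunner(scheme: module).runTests()
    }
}
```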
Both developers and CI would interface with the Executor (masked as Stellar) to perform all operations. E.g.:

stellar setup_environment --developer-mode
stellar run_unit_tests module=OrderHistory
stellar setup_demo_app module=OrderHistory
stellar run_ui_tests module=OrderHistory device="iPhone 15 Pro"

Stellar CLI

Stellar CLI is a command line tool that takes care of the heavy lifting of dealing with the Executor and the Tasks. It allows the integration of Stellar in a project and it should expose the following main commands:
- init: initialises the project by creating an Executor package in the .stellar folder
- build: builds the Executor, generating a binary that is shared with the team members and used by CI
- create-action: scaffolding to create a new action in the form of a package
- create-task: scaffolding to create a new task in the form of a package
- edit: opens the Executor package for editing, similar to tuist edit

This design was presented to a restricted group of devs at Just Eat Takeaway and it didn't take long to get an agreement on it. It was clear that once Stellar was completed, we would integrate it into the codebase.

Wider design

I believe that a combination of CLI tools can create complex, templateable and customizable stacks to support the creation and growth of iOS codebases. Based on the experience developed at JET working on a large modular project with lots of packages, helper tools and optimised CI pipelines, I wanted Stellar to eventually be part of a set of tools taking the name "Stellar Tools" that could enable the creation and the management of large codebases. Something like the following:
- Tuist: generates projects and workspaces programmatically
- PackageGenerator: generates packages using a DSL
- Stacker: creates a modular iOS project based on a DSL
- Stellar: automates tasks
- Workflows: generates GitHub Actions workflows that use Stellar

From my old notes:

Current state

After a few months of development within this team (made of devs not working at Just Eat Takeaway), I realised things were not moving in the direction I desired and I decided it was not beneficial to continue the collaboration with the team. We stopped working on Stellar mainly due to different levels of commitment and a focus on the wrong tasks, signalling a lack of project management on my end. For example, a considerable amount of time and effort went into the implementation of a version management system (vastly inspired by the one used in Tuist) that was not part of the scope of the Stellar project. The experience left me bitter and demotivated, teaching me that sometimes projects are best started alone. We made the repo public on GitHub aware that it was far from being production-ready but, in my opinion, it's no doubt a nice, inspiring MVP.

GitHub - StellarTools/Stellar
GitHub - StellarTools/ActionDSL

The intent was then to progress on my own or with my colleagues at JET. As things evolved in 2023, we embarked on big projects that continued to evolve the platform, such as a massive migration to GitHub Actions. To this day, we still plan to remove Fastlane, as our vision is to rely on external dependencies as little as possible, but there is no plan to use Stellar as-is.
I suspect that, for the infrastructure team at JET, things will evolve in a way that sees more CLI tools being implemented and more GitHub actions using them.
CloudWatch dashboards and alarms on Mac instances
CloudWatch is great for observing and monitoring resources and applications on AWS, on premises, and on other clouds. While it's trivial to have the agent running on Linux, it's a bit more involved for mac instances (which are commonly used as CI workers). The support was announced in January 2021 for mac1.metal (Intel/x86_64) and I bumped into some challenges on mac2.metal (M1/ARM64) that the team at AWS helped me solve (see this issue on the GitHub repo). I couldn't find other articles nor precise documentation from AWS which is why I'm writing this article to walk you through a common CloudWatch setup. The given code samples are for the HashiCorp tools Packer and Terraform and focus on mac2.metal instances. I'll cover the following steps: install the CloudWatch agent on mac2.metal instances configure the CloudWatch agent create a CloudWatch dashboard setup CloudWatch alarms setup IAM permissions Install the CloudWatch agent The CloudWatch agent can be installed by downloading the pkg file listed on this page and running the installer. You probably want to bake the agent into your AMI, so here is the Packer code for mac2.metal (ARM): # Install wget via brew provisioner "shell" { inline = [ "source ~/.zshrc", "brew install wget" ] } # Install CloudWatch agent provisioner "shell" { inline = [ "source ~/.zshrc", "wget https://s3.amazonaws.com/amazoncloudwatch-agent/darwin/arm64/latest/amazon-cloudwatch-agent.pkg", "sudo installer -pkg ./amazon-cloudwatch-agent.pkg -target /" ] } For the agent to work, you'll need collectd (https://collectd.org/) to be installed on the machine, which is usually done via brew. Brew installs it at /opt/homebrew/sbin/. This is also a step you want to perform when creating your AMI. # Install collectd via brew provisioner "shell" { inline = [ "source ~/.zshrc", "brew install collectd" ] } Configure the CloudWatch agent In order to run, the agent needs a configuration which can be created using the wizard. This page defines the metric sets that are available. Running the wizard with the command below will allow you to generate a basic json configuration which you can modify later. sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard The following is a working configuration for Mac instances so you can skip the process. 
{ "agent": { "metrics_collection_interval": 60, "run_as_user": "root" }, "metrics": { "aggregation_dimensions": [ [ "InstanceId" ] ], "append_dimensions": { "AutoScalingGroupName": "${aws:AutoScalingGroupName}", "ImageId": "${aws:ImageId}", "InstanceId": "${aws:InstanceId}", "InstanceType": "${aws:InstanceType}" }, "metrics_collected": { "collectd": { "collectd_typesdb": [ "/opt/homebrew/opt/collectd/share/collectd/types.db" ], "metrics_aggregation_interval": 60 }, "cpu": { "measurement": [ "cpu_usage_idle", "cpu_usage_iowait", "cpu_usage_user", "cpu_usage_system" ], "metrics_collection_interval": 60, "resources": [ "*" ], "totalcpu": false }, "disk": { "measurement": [ "used_percent", "inodes_free" ], "metrics_collection_interval": 60, "resources": [ "*" ] }, "diskio": { "measurement": [ "io_time", "write_bytes", "read_bytes", "writes", "reads" ], "metrics_collection_interval": 60, "resources": [ "*" ] }, "mem": { "measurement": [ "mem_used_percent" ], "metrics_collection_interval": 60 }, "netstat": { "measurement": [ "tcp_established", "tcp_time_wait" ], "metrics_collection_interval": 60 }, "statsd": { "metrics_aggregation_interval": 60, "metrics_collection_interval": 10, "service_address": ":8125" }, "swap": { "measurement": [ "swap_used_percent" ], "metrics_collection_interval": 60 } } } } I have enhanced the output of the wizard with some reasonable metrics to collect. The configuration created by the wizard is almost working but it's lacking a fundamental config to make it work out-of-the-box: the collectd_typesdb value. Linux and Mac differ when it comes to the location of collectd and types.db, and the agent defaults to the Linux path even if it was built for Mac, causing the following error when trying to run the agent: ======== Error Log ======== 2023-07-23T04:57:28Z E! [telegraf] Error running agent: Error loading config file /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml: error parsing socket_listener, open /usr/share/collectd/types.db: no such file or directory Moreover, the /usr/share/ folder is not writable unless you disable SIP (System Integrity Protection) which cannot be done on EC2 mac instances nor is something you want to do for security reasons. The final configuration is something you want to save in System Manager Parameter Store using the ssm_parameter resource in Terraform: resource "aws_ssm_parameter" "cw_agent_config_darwin" { name = "/cloudwatch-agent/config/darwin" description = "CloudWatch agent config for mac instances" type = "String" value = file("./cw-agent-config-darwin.json") } and use it when running the agent in a provisioning step: resource "null_resource" "run_cloudwatch_agent" { depends_on = [ aws_instance.mac_instance ] connection { type = "ssh" agent = false host = aws_instance.mac_instance.private_ip user = "ec2-user" private_key = tls_private_key.mac_instance.private_key_pem timeout = "30m" } # Run CloudWatch agent provisioner "remote-exec" { inline = [ "sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c ssm:${aws_ssm_parameter.cw_agent_config_darwin.name}" ] } } Create a CloudWatch dashboard Once the instances are deployed and running, they will send events to CloudWatch and we can create a dashboard to visualise them. You can create a dashboard manually in the console and once you are happy with it, you can just copy the source code, store it in a file and feed it to Terraform. 
Here is mine that could probably work for you too if you tag your instances with the Type set to macOS: { "widgets": [ { "height": 15, "width": 24, "y": 0, "x": 0, "type": "explorer", "properties": { "metrics": [ { "metricName": "cpu_usage_user", "resourceType": "AWS::EC2::Instance", "stat": "Average" }, { "metricName": "cpu_usage_system", "resourceType": "AWS::EC2::Instance", "stat": "Average" }, { "metricName": "disk_used_percent", "resourceType": "AWS::EC2::Instance", "stat": "Average" }, { "metricName": "diskio_read_bytes", "resourceType": "AWS::EC2::Instance", "stat": "Average" }, { "metricName": "diskio_write_bytes", "resourceType": "AWS::EC2::Instance", "stat": "Average" } ], "aggregateBy": { "key": "", "func": "" }, "labels": [ { "key": "Type", "value": "macOS" } ], "widgetOptions": { "legend": { "position": "bottom" }, "view": "timeSeries", "stacked": false, "rowsPerPage": 50, "widgetsPerRow": 1 }, "period": 60, "splitBy": "", "region": "eu-west-1" } } ] } You can then use the cloudwatch_dashboard resource in Terraform: resource "aws_cloudwatch_dashboard" "mac_instances" { dashboard_name = "mac-instances" dashboard_body = file("./cw-dashboard-mac-instances.json") } It will show something like this: Setup CloudWatch alarms Once the dashboard is up, you should set up alarms so that you are notified of any anomalies, rather than actively monitoring the dashboard for them. What works for me is having alarms triggered via email when the used disk space is going above a certain level (say 80%). We can use the cloudwatch_metric_alarm resource. resource "aws_cloudwatch_metric_alarm" "disk_usage" { alarm_name = "mac-${aws_instance.mac_instance.id}-disk-usage" comparison_operator = "GreaterThanThreshold" evaluation_periods = 30 metric_name = "disk_used_percent" namespace = "CWAgent" period = 120 statistic = "Average" threshold = 80 alarm_actions = [aws_sns_topic.disk_usage.arn] dimensions = { InstanceId = aws_instance.mac_instance.id } } We can then create an SNS topic and subscribe all interested parties to it. This will allow us to broadcast to all subscribers when the alarm is triggered. For this, we can use the sns_topic and sns_topic_subscription resources. resource "aws_sns_topic" "disk_usage" { name = "CW_Alarm_disk_usage_mac_${aws_instance.mac_instance.id}" } resource "aws_sns_topic_subscription" "disk_usage" { for_each = toset(var.alarm_subscriber_emails) topic_arn = aws_sns_topic.disk_usage.arn protocol = "email" endpoint = each.value } variable "alarm_subscriber_emails" { type = list(string) } If you are deploying your infrastructure via GitHub Actions, you can set your subscribers as a workflow input or as an environment variable. Here is how you should pass a list of strings via a variable in Terraform: name: Deploy Mac instance env: ALARM_SUBSCRIBERS: '["user1@example.com","user2@example.com"]' AMI: ... jobs: deploy: ... steps: - name: Terraform apply run: | terraform apply \ --var ami=${{ env.AMI }} \ --var alarm_subscriber_emails='${{ env.ALARM_SUBSCRIBERS }}' \ --auto-approve Setup IAM permissions The instance that performs the deployment requires permissions for CloudWatch, System Manager, and SNS. The following is a policy that is enough to perform both terraform apply and terraform destroy. Please consider restricting to specific resources. 
{ "Version": "2012-10-17", "Statement": [ { "Sid": "CloudWatchDashboardsPermissions", "Effect": "Allow", "Action": [ "cloudwatch:DeleteDashboards", "cloudwatch:GetDashboard", "cloudwatch:ListDashboards", "cloudwatch:PutDashboard" ], "Resource": "*" }, { "Sid": "CloudWatchAlertsPermissions", "Effect": "Allow", "Action": [ "cloudwatch:DescribeAlarms", "cloudwatch:DescribeAlarmsForMetric", "cloudwatch:DescribeAlarmHistory", "cloudwatch:DeleteAlarms", "cloudwatch:DisableAlarmActions", "cloudwatch:EnableAlarmActions", "cloudwatch:ListTagsForResource", "cloudwatch:PutMetricAlarm", "cloudwatch:PutCompositeAlarm", "cloudwatch:SetAlarmState" ], "Resource": "*" }, { "Sid": "SystemsManagerPermissions", "Effect": "Allow", "Action": [ "ssm:GetParameter", "ssm:GetParameters", "ssm:ListTagsForResource", "ssm:DeleteParameter", "ssm:DescribeParameters", "ssm:PutParameter" ], "Resource": "*" }, { "Sid": "SNSPermissions", "Effect": "Allow", "Action": [ "sns:CreateTopic", "sns:DeleteTopic", "sns:GetTopicAttributes", "sns:GetSubscriptionAttributes", "sns:ListSubscriptions", "sns:ListSubscriptionsByTopic", "sns:ListTopics", "sns:SetSubscriptionAttributes", "sns:SetTopicAttributes", "sns:Subscribe", "sns:Unsubscribe" ], "Resource": "*" } ] } On the other hand, to send logs to CloudWatch, the Mac instances require permissions given by the CloudWatchAgentServerPolicy: resource "aws_iam_role_policy_attachment" "mac_instance_iam_role_cw_policy_attachment" { role = aws_iam_role.mac_instance_iam_role.name policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy" } Conclusion You have now defined your CloudWatch dashboard and alarms using "Infrastructure as Code" via Packer and Terraform. I've covered the common use case of instances running out of space on disk which is useful to catch before CI starts becoming unresponsive slowing your team down.
Easy connection to AWS Mac instances with EC2macConnector
Overview
Amazon Web Services (AWS) provides EC2 Mac instances commonly used as CI workers. Configuring them can be either a manual or an automated process, depending on the DevOps and Platform Engineering experience in your company. No matter what process you adopt, it is sometimes useful to log into the instances to investigate problems. EC2macConnector is a CLI tool written in Swift that simplifies the process of connecting over SSH and VNC for DevOps engineers, removing the need to update private keys and maintain the list of IPs that change across deployment cycles.

Connecting to EC2 Mac instances without EC2macConnector

AWS documentation describes the steps needed to allow connecting via VNC:
1. Start the Apple Remote Desktop agent and enable remote desktop access on the instance
2. Set the password for the ec2-user user on the instance to allow connecting over VNC
3. Start an SSH session
4. Connect over VNC

Assuming steps 1 and 2 are done, steps 3 and 4 are usually manual and repetitive: the private keys and IPs usually change across deployments, which could happen frequently, even daily. Here is how to start an SSH session in the terminal binding a port locally:

ssh ec2-user@<instance_IP> \
  -L <local_port>:localhost:5900 \
  -i <path_to_private_key>

To connect over VNC you can type the following in Finder → Go → Connect to Server (⌘ + K) and click Connect:

vnc://ec2-user@localhost:<local_port>

or you could create a .vncloc file with the following content and simply open it:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>URL</key>
    <string>vnc://ec2-user@localhost:<local_port></string>
</dict>
</plist>

If you are a system administrator, you might consider EC2 Instance Connect but, sadly, in my experience it's not a working option for EC2 Mac instances, even though I couldn't find evidence confirming or denying this statement. Administrators could also consider using Apple Remote Desktop, which comes with a price tag of $/£79.99.

Connecting to EC2 Mac instances with EC2macConnector

EC2macConnector is a simple and free tool that works in 2 steps:

1. The configure command fetches the private keys and the IP addresses of the running EC2 Mac instances in a given region, and creates files using the said information to connect over SSH and VNC:

ec2macConnector configure \
  --region <aws_region> \
  --secrets-prefix <mac_metal_private_keys_prefix>

Read below or the README for more information on the secrets prefix value.

2. The connect command connects to the instances via SSH or VNC:

ec2macConnector connect --region <aws_region> <fleet_index>
ec2macConnector connect --region <aws_region> <fleet_index> --vnc

💡 Connecting over VNC requires an SSH session to be established first.

As with any tool written using ArgumentParser, use the --help flag to get more information.

Requirements

There are some requirements to respect for the tool to work:

Permissions

EC2macConnector requires AWS credentials either set as environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) or configured in ~/.aws/credentials via the AWS CLI. Environment variables take precedence. The user must be granted the following permissions:
- ec2:DescribeInstances
- secretsmanager:ListSecrets
- secretsmanager:GetSecretValue

EC2 instances

The EC2 Mac instances must have the EC2macConnector:FleetIndex tag set to the index of the instance in the fleet. Indexes should start at 1.
Instances that don't have the said tag will be ignored.

Secrets and key pairs formats

EC2macConnector assumes that the private key for each instance key pair is stored in Secrets Manager. The name of the key pair can (and should) be different from the secret ID; for example, the instance key pair name should include an incremental number that is also part of the corresponding secret ID. Consider that the number of Mac instances in an AWS account is limited and it's convenient to refer to them using an index starting at 1. It's good practice for the secret ID to also include a nonce, as secrets with the same name cannot be recreated before the deletion period has elapsed, allowing frequent provisioning-decommissioning cycles. For the above reasons, EC2macConnector assumes the following formats are used:
- instance key pairs: <keypair_prefix>_<index_of_instance_in_fleet> e.g. mac_instance_key_pair_5
- secret IDs: <secrets_prefix>_<index_of_instance_in_fleet>_<nonce> e.g. private_key_mac_metal_5_dx9Wna73B

EC2macConnector under the hood

The configure command:
- downloads the private keys in the ~/.ssh folder
- creates scripts to connect over SSH in ~/.ec2macConnector/<region>/scripts
- creates vncloc files to connect over VNC in ~/.ec2macConnector/<region>/vnclocs

➜ .ec2macConnector tree ~/.ssh
/Users/alberto/.ssh
├── mac_metal_1_i-08e4ffd8e9xxxxxxx
├── mac_metal_2_i-07bfff1f52xxxxxxx
├── mac_metal_3_i-020d680610xxxxxxx
├── mac_metal_4_i-08516ac980xxxxxxx
├── mac_metal_5_i-032bedaabexxxxxxx
├── config
├── known_hosts
└── ...

The connect command:
- runs the scripts (opens new shells in Terminal and connects to the instances over SSH)
- opens the vncloc files

➜ .ec2macConnector tree ~/.ec2macConnector
/Users/alberto/.ec2macConnector
└── us-east-1
    ├── scripts
    │   ├── connect_1.sh
    │   ├── connect_2.sh
    │   ├── connect_3.sh
    │   ├── connect_4.sh
    │   └── connect_5.sh
    └── vnclocs
        ├── connect_1.vncloc
        ├── connect_2.vncloc
        ├── connect_3.vncloc
        ├── connect_4.vncloc
        └── connect_5.vncloc
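As a small illustration of the naming convention above, here is a rough sketch (not EC2macConnector's actual implementation) of how the fleet index could be derived from a secret ID in the <secrets_prefix>_<index_of_instance_in_fleet>_<nonce> format:

// Sketch only: extracts the fleet index from a secret ID such as
// "private_key_mac_metal_5_dx9Wna73B" given the prefix "private_key_mac_metal".
func fleetIndex(fromSecretID secretID: String, secretsPrefix: String) -> Int? {
    guard secretID.hasPrefix(secretsPrefix + "_") else { return nil }
    let suffix = secretID.dropFirst(secretsPrefix.count + 1) // "5_dx9Wna73B"
    return suffix.split(separator: "_").first.flatMap { Int($0) }
}

// fleetIndex(fromSecretID: "private_key_mac_metal_5_dx9Wna73B",
//            secretsPrefix: "private_key_mac_metal") // -> 5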

Toggles: the easiest feature flagging in Swift
I previously wrote about JustTweak here. It's the feature flagging mechanism we've been using at Just Eat Takeaway.com to power the iOS consumer apps since 2017. It's proved to be very stable and powerful and it has evolved over time. Friends have heard me promoting it vehemently and some have integrated it with success and appreciation. I don't think I've promoted it in the community enough (it definitely deserved more) but marketing has never been my thing.

Anyway, JustTweak grew old and some changes were debatable and not to my taste. I then decided to use the knowledge gained from years of working on feature flagging to give this project a new life by rewriting it from scratch as a personal project. And here it is: Toggles.

"I never tweeted about this side project of mine. It's like JustTweak (feature flagging), but sensibly better. https://t.co/bdGWuUyQEU #Swift #iOS #macOS"
Alberto De Bortoli (@albertodebo), March 23, 2023

Think of JustTweak, but better. A lot better. Frankly, I couldn't have written it better. Here are the main highlights:
- brand new code, obsessively optimized and kept as short and simple as possible
- excellent performance
- fully tested
- fully documented
- performant UI debug view in SwiftUI
- standard providers provided
- demo app provided
- ability to listen for value changes (using Combine)
- simpler APIs
- ToggleGen CLI, to allow code generation
- ToggleCipher CLI, to allow encoding/decoding of secrets
- JustTweakMigrator CLI, to allow a smooth transition from JustTweak

Read all about it on the repo's README and on the DocC page. It's on Swift Package Index too: Toggles is an elegant and powerful solution to feature flagging for Apple platforms.

There are plans (or at least the desire!) to write a backend with Andrea Scuderi. That'd be really nice!

"@albertodebo This wasn't planned! It looks like we need to build the backend for #Toggles with #Breeze! pic.twitter.com/OxNovRl70L"
andreascuderi (@andreascuderi13), March 26, 2023
The Continuous Integration system used by the mobile teams
- iOS
- Continuous Integration
- Jenkins
- DevOps
In this article, we'll discuss the way our mobile teams have evolved the Continuous Integration (CI) stack over the recent years.
Originally published on the Just Eat Takeaway Engineering Blog.

Overview

In this article, we'll discuss the way our mobile teams have evolved the Continuous Integration (CI) stack over the recent years. We don't have DevOps engineers in our team and, until recently, we had adopted a singular approach in which CI belongs to the whole team and everyone should be able to maintain it. This has proven to be difficult and extremely time-consuming. The Just Eat side of our newly merged entity has a dedicated team providing continuous integration and deployment tools to their teams, but they are heavily backend-centric and there has been little interest in implementing solutions tailored for mobile teams. As is often the case in tech companies, there is a missing link between mobile and DevOps teams. The iOS team is the author and first consumer of the solution described but, as you can see, we have ported the same stack to Android as well. We will mainly focus on the iOS implementation in this article, with references to Android as appropriate.

2016–2020

Historically speaking, the iOS UK app was running on Bitrise because it was decided not to invest time in implementing a CI solution, while the Bristol team was using a Jenkins version installed by a different team. This required manual configuration with custom scripts and it had custom in-house hardware. These are two quite different approaches indeed and, at this stage, things were not great but somehow good enough. It's fair to say we were still young on the DevOps front. When we merged the teams, it was clear that we wanted to unify the CI solution and the obvious choice for a company of our size was to not use a third-party service, bringing us to invest more and more in Jenkins. Only one team member had good knowledge of Jenkins but the rest of the team showed little interest in learning how to configure and maintain it, causing the stack to eventually become a dumping ground of poorly configured jobs. It was during this time that we introduced Fastlane (making the common tasks portable), migrated the UK app from Bitrise to Jenkins, started running the UI tests on Pull Requests, and made other small yet sensible improvements.

2020–2021

Starting in mid-2020, the iOS team significantly revamped its CI stack and gave it new life. The main goals we wanted to achieve (and did by early 2021) were:
- Revisit the pipelines
- Clear Jenkins configuration and deployment strategy
- Make use of AWS Mac instances
- Reduce the pool size of our mac hardware
- Share our knowledge across teams better

Since the start of the pandemic, we have implemented the pipelines in code (bidding farewell to custom bash scripts), moved to a monorepo (which was a massive step ahead) and begun using SonarQube even more aggressively. We added Slack reporting and PR Assigner, an internal tool implemented by Andrea Antonioni. We also automated the common release tasks such as cutting and completing a release and uploading the dSYMs to Firebase. We invested a lot in optimizing various aspects such as running the UI tests in parallel and making use of shallow repo cloning. We also moved to not checking in the pods within the repo. This, eventually, allowed us to reduce the number of agents for easier infrastructure maintenance. Automating the infrastructure deployment of Jenkins was a fundamental shift compared to the previous setup and we have introduced AWS Mac instances replacing part of the fleet of our in-house hardware.

CI system setup

Let's take a look at our stack.
Before we start, weâd like to thank Isham Araia for having provided a proof of concept for the configuration and deployment of Jenkins. He talked about it at https://ish-ar.io/jenkins-at-scale/ and it represented a fundamental starting point, saving us several days of researching. Triggering flow Starting from the left, we have our repositories (plural, as some shared dependencies donât live in the monorepo). The repositories contain the pipelines in the form of Jenkinsfiles and they call into Fastlane lanes. Pretty much every action is a lane, from running the tests to archiving for the App Store to creating the release branches. Changes are raised through pull requests that trigger Jenkins. There are other ways to trigger Jenkins: by user interaction (for things such as completing a release or archiving and uploading the app to App Store Connect) and cron triggers (for things such as building the nightly build, running the tests on the develop branch every 12 hours, or uploading the PACT contract to the broker). Once Jenkins has received the information, it will then schedule the jobs to one of the agents in our pool, which is now made up of 5 agents, 2 in the cloud and 3 in-house mac pros. Reporting flow Now that weâve talked about the first part of the flow, letâs talk about the flow of information coming back at us. Every PR triggers PR Assigner, a tool that works out a list of reviewers to assign to pull requests and notifies engineers via dedicated Slack channels. The pipelines post on Slack, providing info about all the jobs that are being executed so we can read the history without having to log into Jenkins. We have in place the standard notification flow from Jenkins to GitHub to set the status checks and Jenkins also notifies SonarQube to verify that any change meets the quality standards (namely code coverage percentage and coding rules). We also have a smart lambda named SonarQubeStatusProcessor that reports to GitHub, written by Alan Nichols. This is due to a current limitation of SonarQube, which only allows reporting the status of one SQ project to one GitHub repo. Since we have a monorepo structure we had to come up with this neat customization to report the SQ status for all the modules that have changed as part of the PR. Configuration Letâs see what the new interesting parts of Jenkins are. Other than Jenkins itself and several plugins, itâs important to point out JCasC and Job DSL. JCasC stands for Jenkins Configuration as Code, and it allows you to configure Jenkins via a yaml file. The point here is that nobody should ever touch the Jenkins settings directly from the configuration page, in the same way, one ideally shouldnât apply configuration changes manually in any dashboard. The CasC file is where we define the Slack integration, the user roles, SSO configuration, the number of agents and so on. We could also define the jobs in CasC but we go a step further than that. We use the Job DSL plugin that allows you to configure the jobs in groovy and in much more detail. One job we configure in the CasC file though is the seed job. This is a simple freestyle job that will go pick the jobs defined with Job DSL and create them in Jenkins. Deployment Letâs now discuss how we can get a configured Jenkins instance on EC2. In other words, how do we deploy Jenkins? We use a combination of tools that are bread and butter for DevOps people. The commands on the left spawn a Docker container that calls into the tools on the right. 
We start with Packer, which allows us to create the AMI (Amazon Machine Image) together with Ansible, allowing us to configure an environment easily (much more easily than Chef or Puppet). Running the create-image command, the script will:
1. Create a temporary EC2 instance
2. Connect to the instance and execute an Ansible playbook

Our playbook encompasses a number of steps, here's a summary:
- install the Jenkins version for the given Linux distribution
- install Nginx
- copy the SSL cert over
- configure Nginx w/ SSL termination and reverse proxy
- install the plugins for Jenkins

Once the playbook is executed, Packer will export an AMI in EC2 with all of this in it and destroy the instance that was used. With the AMI ready, we can now proceed to deploy our Jenkins. For the actual deployment, we use Terraform, which allows us to define our infrastructure in code. The deploy command runs Terraform under the hood to set up the infrastructure, here's a summary of the task:
- create an IAM Role + IAM Policy
- configure security groups
- create the VPC and subnet to use with a specific CIDR block
- create any private key pair to connect over SSH
- deploy the instance using a static private IP (it has to be static otherwise the A record in Route53 would break)
- copy the JCasC configuration file over so that when Jenkins starts it picks that up to configure itself

The destroy command runs a "terraform destroy" and destroys everything that was created with the deploy command. Deploy and destroy balance each other out. Now that we have Jenkins up and running, we need to give it some credentials so our pipelines are able to work properly. A neat way of doing this is by having the secrets (SSH keys, Firebase tokens, App Store Connect API Key and so forth) in AWS Secrets Manager, which is based on KMS, and using a Jenkins plugin to allow Jenkins to access them. It's important to note that developers don't have to install Packer, Ansible, Terraform or even the AWS CLI locally because the commands run a Docker container that does the real work with all the tools installed. As a result, the only thing one should have installed is really Docker.

CI agents

Enough said about Jenkins, it's time to talk about the agents. As you probably already know, in order to run tests, compile and archive iOS apps we need Xcode, which is only available on macOS, so Linux or Windows instances are not going to cut it. We experimented with the recently introduced AWS Mac instances and they are great, ready out-of-the-box with minimal configuration on our end. What we were hoping to get to with this recent work was the ability to leverage the Jenkins Cloud agents. That would have been awesome because it would have allowed us to:
- let Jenkins manage the agent instances
- scale the agent pool according to the load on CI

Sadly we couldn't go that far. Limitations are:
- the bootstrapping of a mac1.metal takes around 15 minutes
- reusing the dedicated host after having stopped an instance can take up to 3 hours, during which time we just pay for a host that is not usable

When you stop or terminate a Mac instance, Amazon EC2 performs a scrubbing workflow on the underlying Dedicated Host to erase the internal SSD, to clear the persistent NVRAM variables, and if needed, to update the bridgeOS software on the underlying Mac mini. This ensures that Mac instances provide the same security and data privacy as other EC2 Nitro instances. It also enables you to run the latest macOS AMIs without manually updating the bridgeOS software.
During the scrubbing workflow, the Dedicated Host temporarily enters the pending state. If the bridgeOS software does not need to be updated, the scrubbing workflow takes up to 50 minutes to complete. If the bridgeOS software needs to be updated, the scrubbing workflow can take up to 3 hours to complete. Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html In other words: scaling mac instances is not an option and leaving the instances up 24/7 seems to be the easiest option. This is especially valid if your team is distributed and jobs could potentially run over the weekend as well, saving you the hassle of implementing downscaling ahead of the weekend. There are some pricing and instance allocation considerations to make. Note that On-Demand Mac1 Dedicated Hosts have a minimum host allocation and billing duration of 24 hours. âYou can purchase Savings Plans to lower your spend on Dedicated Hosts. Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instances usage, regardless of instance family, size, OS, tenancy or AWS Region.â Source: https://aws.amazon.com/ec2/dedicated-hosts/pricing/ The On-Demand rate is $1.207 per hour. Iâd like to stress that no CI solution comes for free. Iâve often heard developers indicating that Travis and similar products are cheaper. The truth is that the comparison is not even remotely reasonable: virtual boxes are incredibly slow compared to native Apple hardware and take ridiculous bootstrapping times. Even the smallest projects suffer terribly. One might ask if itâs at least possible to use the same configuration process we used for the Jenkins instance (with Packer and Ansible) but sadly we hit additional limitations: Apple requires 2FA for downloading Xcode via xcode-version Apple requires 2FA for signing into Xcode The above pretty much causes the configuration flow to fall apart making it impossible to configure an instance via Ansible. Cloud agents for Android It was a different story for Android, in which we could easily configure the agent instance with Ansible and therefore leverage the Cloud configuration to allow automatic agent provisioning. This configuration is defined via CasC as everything else. To better control EC2 usage and costs, a few settings come in handy: minimum number of instances (up at all times) minimum number of spare instances (created to accommodate future jobs) instance cap: the maximum number of instances that can be provisioned at the same time idle termination time: how long agents should be kept alive after they have completed the job All of the above allow for proper scaling and a lot less maintenance compared to the iOS setup. A simple setup with 0 instances up at all times allows saving costs overnight and given that in our case the bootstrapping takes only 2 minutes, we can rely on the idle time setting. Conclusions Setting up an in-house CI is never a straightforward process and it requires several weeks of dedicated work. After years of waiting, Apple has announced Xcode Cloud which we believe will drastically change the landscape of continuous integration on iOS. The solution will most likely cause havoc for companies such as Bitrise and CircleCI and itâs reasonable to assume the pricing will be competitive compared to AWS, maybe running on custom hardware that only Apple is able to produce. 
A shift this big will take time to occur, so we foresee our solution staying in use for quite some time. We hope to have inspired you with a possible setup for mobile teams and informed you about the pros & cons of using EC2 Mac instances.

iOS Monorepo & CI Pipelines
- iOS
- Monorepo
- Continuous Integration
- Jenkins
- Cocoapods
We have presented our modular iOS architecture in a previous article and I gave a talk at Swift Heroes 2020 about it. In this article, we'll analyse the challenges we faced to have the modular architecture integrated with our CI pipelines and the reasoning behind migrating to a monorepo.
Originally published on the Just Eat Takeaway Engineering Blog. We have presented our modular iOS architecture in a previous article and I gave a talk at Swift Heroes 2020 about it. In this article, weâll analyse the challenges we faced to have the modular architecture integrated with our CI pipelines and the reasoning behind migrating to a monorepo. The Problem Having several modules in separate repositories brings forward 2 main problems: Each module is versioned independently from the consuming app Each change involves at least 2 pull requests: 1 for the module and 1 for the integration in the app While the above was acceptable in a world where we had 2 different codebases, it soon became unnecessarily convoluted after we migrated to a new, global codebase. New module versions are implemented with the ultimate goal of being adopted by the only global codebase in use, making us realise we could simplify the change process. The monorepo approach has been discussed at length by the community for a few years now. Many talking points have come out of these conversations, even leading to an interesting story as told by Uber. In short, it entails putting all the code owned by the team in a single repository, precisely solving the 2 problems stated above. Monorepo structure The main advantage of a monorepo is a streamlined PR process that doesnât require us to raise multiple PRs, de facto reducing the number of pull requests to one. It also simplifies the versioning, allowing module and app code (ultimately shipped together) to be aligned using the same versioning. The first step towards a monorepo was to move the content of the repositories of the modules to the main app repo (weâll call it âmonorepoâ from now on). Since we rely on CocoaPods, the modules would be consumed as development pods. Hereâs a brief summary of the steps used to migrate a module to the monorepo: Inform the relevant teams about the upcoming migration Make sure there are no open PRs in the module repo Make the repository read-only and archive it Copy the module to the Modules folder of the monorepo (itâs possible to merge 2 repositories to keep the history but we felt we wanted to keep the process simple, the old history is still available in the old repo anyway) Delete the module .git folder (or it would cause a git submodule) Remove Gemfile and Gemfile.lock fastlane folder, .gitignore file, sonar-project.properties, .swiftlint.yml so to use those in the monorepo Update the monorepoâs CODEOWNERS file with the module codeowners Remove the .github folder Modify the app Podfile to point to the module as a dev pod and install it Make sure all the modulesâ demo apps in the monorepo refer to the new module as a dev pod (if they depend on it at all). The same applies to the module under migration. Delete the CI jobs related to the module Leave the podspecs in the private Specs repo (might be needed to build old versions of the app) The above assumes that CI is configured in a way that preserves the same integration steps upon a module change. Weâll discuss them later in this article. Not all the modules could be migrated to the monorepo, due to the fact the second-level dependencies need to live in separate repositories in order to be referenced in the podspec of a development pod. If not done correctly, CocoaPods will not be able to install them. 
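For reference, a development pod in the monorepo is referenced from the app's Podfile roughly like this (target and module names are illustrative, not our actual setup):

# Podfile of the main app: modules live in the monorepo and are consumed as development pods
target 'ConsumerApp' do
  pod 'Orders', :path => 'Modules/Orders'
  pod 'APIClient', :path => 'Modules/APIClient'
end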
We considered moving these dependencies to the monorepo whilst maintaining separate versioning, however, the main problem with this approach is that the version tags might conflict with the ones of the app. Even though CocoaPods supports tags that donât respect semantic versioning (for example prepending the tag with the name of the module), violating it just didnât feel right. EDIT: weâve learned that itâs possible to move such dependencies to the monorepo. This is done not by defining :path=> in the podspecs but instead by doing so in the Podfile of the main app, which is all Cocoapods needs to work out the location of the dependency on disk. Swift Package Manager considerations We investigated the possibility of migrating from CocoaPods to Appleâs Swift Package Manager. Unfortunately, when it comes to handling the equivalent of development pods, Swift Package Manager really falls down for us. It turns out that Swift Package Manager only supports one package per repo, which is frustrating because the process of working with editable packages is surprisingly powerful and transparent. Version pinning rules While development pods donât need to be versioned, other modules still need to. This is either because of their open-source nature or because they are second-level dependencies (referenced in other modulesâ podspecs). Hereâs a revised overview of the current modular architecture in 2021. We categorised our pods to better clarify what rules should apply when it comes to version pinning both in the Podfiles and in the podspecs. Open-Source pods Our open-source repositories on github.com/justeat are only used by the app. Examples: JustTweak, AutomationTools, Shock Pinning in other modulesâ podspec: NOT APPLICABLE open-source pods donât appear in any podspec, those that do are called âopen-source sharedâ Pinning in other modulesâ Podfile (demo apps): PIN (e.g. AutomationTools in Orders demo appâs Podfile) Pinning in main appâs Podfile: PIN (e.g. AutomationTools) Open-Source shared pods The Just Eat pods we put open-source on github.com/justeat and are used by modules and apps. Examples: JustTrack, JustLog, ScrollingStackViewController, ErrorUtilities Pinning in other modulesâ podspec: PIN w/ optimistic operator (e.g. JustTrack in Orders) Pinning in other modulesâ Podfile (demo apps): PIN (e.g. JustTrack in Orders demo appâs Podfile) Pinning in main appâs Podfile: DONâT LIST latest compatible version is picked by CocoaPods (e.g. JustTrack). LIST & PIN if the pod is explicitly used in the app too, so we donât magically inherit it. Internal Domain pods Domain modules (yellow). Examples: Orders, SERP, etc. Pinning in other modulesâ podspec: NOT APPLICABLE domain pods donât appear in other podsâ podspecs (domain modules donât depend on other domain modules) Pinning in other modulesâ Podfile (demo apps): PIN only if the pod is used in the app code, rarely the case (e.g. Account in Orders demo appâs Podfile) Pinning in main appâs Podfile: PIN (e.g. Orders) Internal Core pods Core modules (blue) minus those open-source. Examples: APIClient, AssetProvider Pinning in other modulesâ podspec: NOT APPLICABLE core pods donât appear in other podsâ podspecs (core modules are only used in the app(s)) Pinning in other modulesâ Podfile (demo apps): PIN only if pod is used in the app code (e.g. APIClient in Orders demo appâs Podfile) Pinning in main appâs Podfile: PIN (e.g. NavigationEngine) Internal shared pods Shared modules (green) minus those open-source. 
Examples: JustUI, JustAnalytics Pinning in other modulesâ podspec: DONâT PIN (e.g. JustUI in Orders podspec) Pinning in other modulesâ Podfile (demo apps): PIN (e.g. JustUI in Orders demo appâs Podfile) Pinning in main appâs Podfile: PIN (e.g. JustUI) External shared pods Any non-Just Eat pod used by any internal or open-source pod. Examples: Usabilla, SDWebImage Pinning in other modulesâ podspec: PIN (e.g. Usabilla in Orders) Pinning in other modulesâ Podfile (demo apps): DONâT LIST because the version is forced by the podspec. LIST & PIN if the pod is explicitly used in the app too, so we donât magically inherit it. Pinning is irrelevant but good practice. Pinning in main appâs Podfile: DONâT LIST because the version is forced by the podspec(s). LIST & PIN if the pod is explicitly used in the app too, so we donât magically inherit it. Pinning is irrelevant but good practice. External pods Any non-Just Eat pod used by the app only. Examples: Instabug, GoogleAnalytics Pinning in other modulesâ podspec: NOT APPLICABLE external pods donât appear in any podspec, those that do are called âexternal sharedâ Pinning in other modulesâ Podfile (demo apps): PIN only if the pod is used in the app code, rarely the case (e.g. Promis) Pinning in main appâs Podfile: PIN (e.g. Adjust) Pinning is a good solution because it guarantees that we always build the same software regardless of new released versions of dependencies. Itâs also true that pinning every dependency all the time makes the dependency graph hard to keep updated. This is why we decided to allow some flexibility in some cases. Following is some more reasoning. Open-source For âopen-source sharedâ pods, we are optimistic enough (pun intended) to tolerate the usage of the optimistic operator ~> in podspecs of other pods (i.e Orders using JustTrack) so that when a new patch version is released, the consuming pod gets it for free upon running pod update. We have control over our code and, by respecting semantic versioning, we guarantee the consuming pod to always build. In case of new minor or major versions, we would have to update the podspecs of the consuming pods, which is appropriate. Also, we do need to list any âopen-source sharedâ pod in the main appâs Podfile only if directly used by the app code. External We donât have control over the âexternalâ and âexternal sharedâ pods, therefore we always pin the version in the appropriate place. New patch versions might not respect semantic versioning for real and we donât want to pull in new code unintentionally. As a rule of thumb, we prefer injecting external pods instead of creating a dependency in the podspec. Internal Internal shared pods could change frequently (not as much as domain modules). For this reason, weâve decided to relax a constraint we had and not to pin the version in the podspec. This might cause the consuming pod to break when a new version of an âinternal sharedâ pod is released and we run pod update. This is a compromise we can tolerate. The alternative would be to pin the version causing too much work to update the podspec of the domain modules. Continuous Integration changes With modules in separate repositories, the CI was quite simply replicating the same steps for each module: install pods run unit tests run UI tests generated code coverage submit code coverage to SonarQube Moving the modules to the monorepo meant creating smart CI pipelines that would run the same steps upon modulesâ changes. 
If a pull request is to change only app code, there is no need to run any step for the modules, just the usual steps for the app: If instead, a pull request applies changes to one or more modules, we want the pipeline to first run the steps for the modules, and then the steps for the app: Even if there are no changes in the app code, module changes could likely impact the app behaviour, so itâs important to always run the app tests. We have achieved the above setup through constructing our Jenkins pipelines dynamically. The solution should scale when new modules are added to the monorepo and for this reason, itâs important that all modules: respect the same project setup (generated by CocoaPods w/ the pod lib create command) use the same naming conventions for the test schemes (UnitTests/ContractTests/UITests) make use of Apple Test Plans are in the same location ( ./Modules/ folder). Following is an excerpt of the code that constructs the modulesâ stages from the Jenkinsfile used for pull request jobs. scripts = load "./Jenkins/scripts/scripts.groovy" def modifiedModules = scripts.modifiedModulesFromReferenceBranch(env.CHANGE_TARGET) def modulesThatNeedUpdating = scripts.modulesThatNeedUpdating(env.CHANGE_TARGET) def modulesToRun = (modulesThatNeedUpdating + modifiedModules).unique() sh "echo \"List of modules modified on this branch: ${modifiedModules}\"" sh "echo \"List of modules that need updating: ${modulesThatNeedUpdating}\"" sh "echo \"Pipeline will run the following modules: ${modulesToRun}\"" for (int i = 0; i < modulesToRun.size(); ++i) { def moduleName = modulesToRun[i] stage('Run pod install') { sh "bundle exec fastlane pod_install module:${moduleName}" } def schemes = scripts.testSchemesForModule(moduleName) schemes.each { scheme -> switch (scheme) { case "UnitTests": stage("${moduleName} Unit Tests") { sh "bundle exec fastlane module_unittests \ module_name:${moduleName} \ device:'${env.IPHONE_DEVICE}'" } stage("Generate ${moduleName} code coverage") { sh "bundle exec fastlane generate_sonarqube_coverage_xml" } stage("Submit ${moduleName} code coverage to SonarQube") { sh "bundle exec fastlane sonar_scanner_pull_request \ component_type:'module' \ source_branch:${env.BRANCH_NAME} \ target_branch:${env.CHANGE_TARGET} \ pull_id:${env.CHANGE_ID} \ project_key:'ios-${moduleName}' \ project_name:'iOS ${moduleName}' \ sources_path:'./Modules/${moduleName}/${moduleName}'" } break; case "ContractTests": stage('Install pact mock service') { sh "bundle exec fastlane install_pact_mock_service" } stage("${moduleName} Contract Tests") { sh "bundle exec fastlane module_contracttests \ module_name:${moduleName} \ device:'${env.IPHONE_DEVICE}'" } break; case "UITests": stage("${moduleName} UI Tests") { sh "bundle exec fastlane module_uitests \ module_name:${moduleName} \ number_of_simulators:${env.NUMBER_OF_SIMULATORS} \ device:'${env.IPHONE_DEVICE}'" } break; default: break; } } } and here are the helper functions to make it all work: def modifiedModulesFromReferenceBranch(String referenceBranch) { def script = "git diff --name-only remotes/origin/${referenceBranch}" def filesChanged = sh script: script, returnStdout: true Set modulesChanged = [] filesChanged.tokenize("\n").each { def components = it.split('/') if (components.size() > 1 && components[0] == 'Modules') { def module = components[1] modulesChanged.add(module) } } return modulesChanged } def modulesThatNeedUpdating(String referenceBranch) { def modifiedModules = modifiedModulesFromReferenceBranch(referenceBranch) def allModules 
= allMonorepoModules() def modulesThatNeedUpdating = [] for (module in allModules) { def podfileLockPath = "Modules/${module}/Example/Podfile.lock" def dependencies = podfileDependencies(podfileLockPath) def dependenciesIntersection = dependencies.intersect(modifiedModules) as TreeSet Boolean moduleNeedsUpdating = (dependenciesIntersection.size() > 0) if (moduleNeedsUpdating == true && modifiedModules.contains(module) == false) { modulesThatNeedUpdating.add(module) } } return modulesThatNeedUpdating } def podfileDependencies(String podfileLockPath) { def dependencies = [] def fileContent = readFile(file: podfileLockPath) fileContent.tokenize("\n").each { line -> def lineComponents = line.split('\\(') if (lineComponents.length > 1) { def dependencyLineSubComponents = lineComponents[0].split('-') if (dependencyLineSubComponents.length > 1) { def moduleName = dependencyLineSubComponents[1].trim() dependencies.add(moduleName) } } } return dependencies } def allMonorepoModules() { def modulesList = sh script: "ls Modules", returnStdout: true return modulesList.tokenize("\n").collect { it.trim() } } def testSchemesForModule(String moduleName) { def script = "xcodebuild -project ./Modules/${moduleName}/Example/${moduleName}.xcodeproj -list" def projectEntitites = sh script: script, returnStdout: true def schemesPart = projectEntitites.split('Schemes:')[1] def schemesPartLines = schemesPart.split(/\n/) def trimmedLined = schemesPartLines.collect { it.trim() } def filteredLines = trimmedLined.findAll { !it.allWhitespace } def allowedSchemes = ['UnitTests', 'ContractTests', 'UITests'] def testSchemes = filteredLines.findAll { allowedSchemes.contains(it) } return testSchemes } You might have noticed the modulesThatNeedUpdating method in the code above. Each module comes with a demo app using the dependencies listed in its Podfile and itâs possible that other monorepo modules are listed there as development pods. This not only means that we have to run the steps for the main app, but also the steps for every module consuming modules that show changes. For example, the Orders demo app uses APIClient, meaning that pull requests with changes in APIClient will generate pipelines including the Orders steps. Pipeline parallelization Something we initially thought was sensible to consider is the parallelisation of the pipelines across different nodes. We use parallelisation for the release pipelines and learned that, while it seems to be a fundamental requirement at first, it soon became apparent not to be so desirable nor truly fundamental for the pull requests pipeline. Weâll discuss our CI setup in a separate article, but suffice to say that we have aggressively optimized it and managed to reduce the agent pool from 10 to 5, still maintaining a good level of service. Parallelisation sensibly complicates the Jenkinsfiles and their maintainability, spreads the cost of checking out the repository across nodes and makes the logs harder to read. The main benefit would come from running the app UI tests on different nodes. In the WWDC session 413, Apple recommends generating the .xctestrun file using the build-for-testing option in xcodebuild and distribute it across the other nodes. Since our app is quite large, such file is also large and transferring it has its costs, both in time and bandwidth usage. All things considered, we decided to keep the majority of our pipelines serial. EDIT: In 2022 we have parallelised our PR pipeline in 4 branches: Validation steps (linting, Fastlane lanes tests, etc.) 
App unit tests
App UI tests (short enough that there's no need to share .xctestrun across nodes)
Modified modules unit tests
Modified modules UI tests

Conclusions

We have used the setup described in this article since mid-2020 and we are very satisfied with it. We discussed the pipeline used for pull requests, which is the most relevant one when it comes to embracing a monorepo structure. We have a few more pipelines for various use cases, such as verifying changes in release branches, keeping the code coverage metrics up-to-date with jobs running on triggers, and archiving the app for internal usage and for the App Store. We hope to have given you some useful insights on how to structure a monorepo and its CI pipelines, especially if you have a structure similar to ours.

The algorithm powering iHarmony
- music
- chords
- scales
- iOS
- swift
- App Store
Problem
I wrote the first version of iHarmony in 2008. It was the very first iOS app I gave birth to, combining my passion for music and programming. I remember buying an iPhone and my first Mac with the precise purpose of jumping on the apps train at a time when it wasn't clear if the apps were there to stay or were just a temporary hype. But I did it, dropped my beloved Ubuntu to join a whole new galaxy. iHarmony was also one of the first 2000 apps on the App Store.

Up until the recent rewrite, iHarmony was powered by a manually crafted database containing scales, chords, and harmonizations I inputted. What-a-shame! I guess it made sense, I wanted to learn iOS and not to focus on implementing some core logic independent from the platform. Clearly a much better and less error-prone way to go would be to implement an algorithm to generate all the entries based on some DSL/spec. It took me almost 12 years to decide to tackle the problem and I've recently realized that writing the algorithm I wanted was harder than I thought. I also thought it was a good idea to give SwiftUI a try since the UI of iHarmony is extremely simple but... nope.

Since someone on the Internet expressed interest, I wrote this article to explain how I solved the problem of modeling music theory concepts in a way that allows the generation of any sort of scales, chords, and harmonizations. I only show the code needed to get a grasp of the overall structure. I know there are other solutions ready to be used on GitHub but, while I don't particularly like any of them, the point of rewriting iHarmony from scratch was to challenge myself, not to reuse code someone else wrote. Surprisingly to me, getting to the solution described here took me 3 rewrites and 2 weeks.

Solution

The first fundamental building blocks to model are surely the musical notes, which are made up of a natural note and an accidental.

enum NaturalNote: String {
    case C, D, E, F, G, A, B
}

enum Accidental: String {
    case flatFlatFlat = "bbb"
    case flatFlat = "bb"
    case flat = "b"
    case natural = ""
    case sharp = "#"
    case sharpSharp = "##"
    case sharpSharpSharp = "###"

    func applyAccidental(_ accidental: Accidental) throws -> Accidental {...}
}

struct Note: Hashable, Equatable {
    let naturalNote: NaturalNote
    let accidental: Accidental
    ...
    static let Dff = Note(naturalNote: .D, accidental: .flatFlat)
    static let Df = Note(naturalNote: .D, accidental: .flat)
    static let D = Note(naturalNote: .D, accidental: .natural)
    static let Ds = Note(naturalNote: .D, accidental: .sharp)
    static let Dss = Note(naturalNote: .D, accidental: .sharpSharp)
    ...
    func noteByApplyingAccidental(_ accidental: Accidental) throws -> Note {...}
}

Combinations of notes make up scales and chords and they are... many. What's fixed instead in music theory, and therefore can be hard-coded, are the keys (both major and minor) such as:
- C major: C, D, E, F, G, A, B
- A minor: A, B, C, D, E, F, G
- D major: D, E, F#, G, A, B, C#

We'll get back to the keys later, but we can surely implement the note sequence for each musical key.

typealias NoteSequence = [Note]

extension NoteSequence {
    static let C = [Note.C, Note.D, Note.E, Note.F, Note.G, Note.A, Note.B]
    static let A_min = [Note.A, Note.B, Note.C, Note.D, Note.E, Note.F, Note.G]
    static let G = [Note.G, Note.A, Note.B, Note.C, Note.D, Note.E, Note.Fs]
    static let E_min = [Note.E, Note.Fs, Note.G, Note.A, Note.B, Note.C, Note.D]
    ...
}

Next stop: intervals. They are a bit more interesting as not every degree has the same types.
Let's split into 2 sets: 2nd, 3rd, 6th and 7th degrees can be minor, major, diminished and augmented 1st (and 8th), 4th and 5th degrees can be perfect, diminished and augmented. We need to use different kinds of "diminished" and "augmented" for the 2 sets as later on we'll have to calculate the accidentals needed to turn an interval into another. Some examples: to get from 2nd augmented to 2nd diminished, we need a triple flat accidental (e.g. in C major scale, from D⯠to Dââ there are 3 semitones) to get from 5th augmented to 5th diminished, we need a double flat accidental (e.g. in C major scale, from G⯠to Gâthere are 2 semitones) We proceed to hard-code the allowed intervals in music, leaving out the invalid ones (e.g. Interval(degree: ._2, type: .augmented)) enum Degree: Int, CaseIterable { case _1, _2, _3, _4, _5, _6, _7, _8 } enum IntervalType: Int, RawRepresentable { case perfect case minor case major case diminished case augmented case minorMajorDiminished case minorMajorAugmented } struct Interval: Hashable, Equatable { let degree: Degree let type: IntervalType static let _1dim = Interval(degree: ._1, type: .diminished) static let _1 = Interval(degree: ._1, type: .perfect) static let _1aug = Interval(degree: ._1, type: .augmented) static let _2dim = Interval(degree: ._2, type: .minorMajorDiminished) static let _2min = Interval(degree: ._2, type: .minor) static let _2maj = Interval(degree: ._2, type: .major) static let _2aug = Interval(degree: ._2, type: .minorMajorAugmented) ... static let _4dim = Interval(degree: ._4, type: .diminished) static let _4 = Interval(degree: ._4, type: .perfect) static let _4aug = Interval(degree: ._4, type: .augmented) ... static let _7dim = Interval(degree: ._7, type: .minorMajorDiminished) static let _7min = Interval(degree: ._7, type: .minor) static let _7maj = Interval(degree: ._7, type: .major) static let _7aug = Interval(degree: ._7, type: .minorMajorAugmented) } Now it's time to model the keys (we touched on them above already). What's important is to define the intervals for all of them (major and minor ones). enum Key { // natural case C, A_min // sharp case G, E_min case D, B_min case A, Fs_min case E, Cs_min case B, Gs_min case Fs, Ds_min case Cs, As_min // flat case F, D_min case Bf, G_min case Ef, C_min case Af, F_min case Df, Bf_min case Gf, Ef_min case Cf, Af_min ... enum KeyType { case naturalMajor case naturalMinor case flatMajor case flatMinor case sharpMajor case sharpMinor } var type: KeyType { switch self { case .C: return .naturalMajor case .A_min: return .naturalMinor case .G, .D, .A, .E, .B, .Fs, .Cs: return .sharpMajor case .E_min, .B_min, .Fs_min, .Cs_min, .Gs_min, .Ds_min, .As_min: return .sharpMinor case .F, .Bf, .Ef, .Af, .Df, .Gf, .Cf: return .flatMajor case .D_min, .G_min, .C_min, .F_min, .Bf_min, .Ef_min, .Af_min: return .flatMinor } } var intervals: [Interval] { switch type { case .naturalMajor, .flatMajor, .sharpMajor: return [ ._1, ._2maj, ._3maj, ._4, ._5, ._6maj, ._7maj ] case .naturalMinor, .flatMinor, .sharpMinor: return [ ._1, ._2maj, ._3min, ._4, ._5, ._6min, ._7min ] } } var notes: NoteSequence { switch self { case .C: return .C case .A_min: return .A_min ... } } At this point we have all the fundamental building blocks and we can proceed with the implementation of the algorithm. The idea is to have a function that given a key a root interval a list of intervals it works out the list of notes. 
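Concretely, the entry point we're heading towards looks like the sketch below. It's only a preview of the CoreEngine API that the unit tests and implementation later in the article define (including Key's associatedCustomKey helper used in the tests), not additional code.

```swift
// Previewing the API fleshed out below: the engine turns a key (in its
// CustomKey form), an optional root interval and a list of intervals
// into the resulting notes.
let engine = CoreEngine()

// Major scale intervals applied to the C major key from its root
// produce the C major scale: C, D, E, F, G, A, B.
let cMajorScale = try engine.noteSequence(
    customKey: Key.C.associatedCustomKey,
    intervals: [._1, ._2maj, ._3maj, ._4, ._5, ._6maj, ._7maj]
)
```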
In terms of inputs, it seems the above is all we need to correctly work out scales, chords, and - by extension - also harmonizations. Mind that the root interval doesn't have to be part of the list of intervals, that is simply the interval to start from based on the given key. Giving a note as a starting point is not good enough since some scales simply don't exist for some notes (e.g. G⯠major scale doesn't exist in the major key, and Gâminor scale doesn't exist in any minor key). Before progressing to the implementation, please consider the following unit tests that should make sense to you: func test_noteSequence_C_1() { let key: Key = .C let noteSequence = try! engine.noteSequence(customKey: key.associatedCustomKey, intervals: [._1, ._2maj, ._3maj, ._4, ._5, ._6maj, ._7maj]) let expectedValue: NoteSequence = [.C, .D, .E, .F, .G, .A, .B] XCTAssertEqual(noteSequence, expectedValue) } func test_noteSequence_withRoot_C_3maj_majorScaleIntervals() { let key = Key.C let noteSequence = try! engine.noteSequence(customKey: key.associatedCustomKey, rootInterval: ._3maj, intervals: [._1, ._2maj, ._3maj, ._4, ._5, ._6maj, ._7maj]) let expectedValue: NoteSequence = [.E, .Fs, .Gs, .A, .B, .Cs, .Ds] XCTAssertEqual(noteSequence, expectedValue) } func test_noteSequence_withRoot_Gsmin_3maj_alteredScaleIntervals() { let key = Key.Gs_min let noteSequence = try! engine.noteSequence(customKey: key.associatedCustomKey, rootInterval: ._3maj, intervals: [._1aug, ._2maj, ._3dim, ._4dim, ._5aug, ._6dim, ._7dim]) let expectedValue: NoteSequence = [.Bs, .Cs, .Df, .Ef, .Fss, .Gf, .Af] XCTAssertEqual(noteSequence, expectedValue) } and here is the implementation. Let's consider a simple case, so it's easier to follow: key = C major root interval = 3maj interval = major scale interval (1, 2maj, 3min, 4, 5, 6maj, 7min) if you music theory allowed you to understand the above unit tests, you would expect the output to be: E, FâŻ, G, A, B, CâŻ, D (which is a Dorian scale). Steps: we start by shifting the notes of the C key to position the 3rd degree (based on the 3maj) as the first element of the array, getting the note sequence E, F, G, A, B, C, D; here's the first interesting bit: we then get the list of intervals by calculating the number of semitones from the root to any other note in the sequence and working out the corresponding Interval: 1_perfect, 2_minor, 3_minor, 4_perfect, 5_perfect, 6_minor, 7_minor; we now have all we need to create a CustomKey which is pretty much a Key (with notes and intervals) but instead of being an enum with pre-defined values, is a struct; here's the second tricky part: return the notes by mapping the input intervals. Applying to each note in the custom key the accidental needed to match the desired interval. In our case, the only 2 intervals to 'adjust' are the 2nd and the 6th intervals, both minor in the custom key but major in the list of intervals. So we have to apply a sharp accidental to 'correct' them. đ I've used force unwraps in these examples for simplicity, the code might already look complex by itself. class CoreEngine { func noteSequence(customKey: CustomKey, rootInterval: Interval = ._1, intervals: [Interval]) throws -> NoteSequence { // 1. let noteSequence = customKey.shiftedNotes(by: rootInterval.degree) let firstNoteInShiftedSequence = noteSequence.first! // 2. let adjustedIntervals = try noteSequence.enumerated().map { try interval(from: firstNoteInShiftedSequence, to: $1, targetDegree: Degree(rawValue: $0)!) } // 3. 
let customKey = CustomKey(notes: noteSequence, intervals: adjustedIntervals) // 4. return try intervals.map { let referenceInterval = customKey.firstIntervalWithDegree($0.degree)! let note = customKey.notes[$0.degree.rawValue] let accidental = try referenceInterval.type.accidental(to: $0.type) return try note.noteByApplyingAccidental(accidental) } } } It's worth showing the implementation of the methods used above: private func numberOfSemitones(from sourceNote: Note, to targetNote: Note) -> Int { let notesGroupedBySameTone: [[Note]] = [ [.C, .Bs, .Dff], [.Cs, .Df, .Bss], [.D, .Eff, .Css], [.Ds, .Ef, .Fff], [.E, .Dss, .Ff], [.F, .Es, .Gff], [.Fs, .Ess, .Gf], [.G, .Fss, .Aff], [.Gs, .Af], [.A, .Gss, .Bff], [.As, .Bf, .Cff], [.B, .Cf, .Ass] ] let startIndex = notesGroupedBySameTone.firstIndex { $0.contains(sourceNote)}! let endIndex = notesGroupedBySameTone.firstIndex { $0.contains(targetNote)}! return endIndex >= startIndex ? endIndex - startIndex : (notesGroupedBySameTone.count - startIndex) + endIndex } private func interval(from sourceNote: Note, to targetNote: Note, targetDegree: Degree) throws -> Interval { let semitones = numberOfSemitones(from: sourceNote, to: targetNote) let targetType: IntervalType = try { switch targetDegree { case ._1, ._8: return .perfect ... case ._4: switch semitones { case 4: return .diminished case 5: return .perfect case 6: return .augmented default: throw CustomError.invalidConfiguration ... case ._7: switch semitones { case 9: return .minorMajorDiminished case 10: return .minor case 11: return .major case 0: return .minorMajorAugmented default: throw CustomError.invalidConfiguration } } }() return Interval(degree: targetDegree, type: targetType) } the Note's noteByApplyingAccidental method: func noteByApplyingAccidental(_ accidental: Accidental) throws -> Note { let newAccidental = try self.accidental.apply(accidental) return Note(naturalNote: naturalNote, accidental: newAccidental) } and the Accidental's apply method: func apply(_ accidental: Accidental) throws -> Accidental { switch (self, accidental) { ... case (.flat, .flatFlatFlat): throw CustomError.invalidApplicationOfAccidental case (.flat, .flatFlat): return .flatFlatFlat case (.flat, .flat): return .flatFlat case (.flat, .natural): return .flat case (.flat, .sharp): return .natural case (.flat, .sharpSharp): return .sharp case (.flat, .sharpSharpSharp): return .sharpSharp case (.natural, .flatFlatFlat): return .flatFlatFlat case (.natural, .flatFlat): return .flatFlat case (.natural, .flat): return .flat case (.natural, .natural): return .natural case (.natural, .sharp): return .sharp case (.natural, .sharpSharp): return .sharpSharp case (.natural, .sharpSharpSharp): return .sharpSharpSharp ... } With the above engine ready (and đŻïčȘ unit tested!), we can now proceed to use it to work out what we ultimately need (scales, chords, and harmonizations). extension CoreEngine { func scale(note: Note, scaleIdentifier: Identifier) throws -> NoteSequence {...} func chord(note: Note, chordIdentifier: Identifier) throws -> NoteSequence {...} func harmonization(key: Key, harmonizationIdentifier: Identifier) throws -> NoteSequence {...} func chordSignatures(note: Note, scaleHarmonizationIdentifier: Identifier) throws -> [ChordSignature] {...} func harmonizations(note: Note, scaleHarmonizationIdentifier: Identifier) throws -> [NoteSequence] {...} } Conclusions There's more to it but with this post I only wanted to outline the overall idea. 
The default database is available on GitHub at albertodebortoli/iHarmonyDB. The format used is JSON and the community can now easily suggest additions. Here is how the definition of a scale looks: "scale_dorian": { "group": "group_scales_majorModes", "isMode": true, "degreeRelativeToMain": 2, "inclination": "minor", "intervals": [ "1", "2maj", "3min", "4", "5", "6maj", "7min" ] } and a chord: "chord_diminished": { "group": "group_chords_diminished", "abbreviation": "dim", "intervals": [ "1", "3min", "5dim" ] } and a harmonization: "scaleHarmonization_harmonicMajorScale4Tones": { "group": "group_harmonization_harmonic_major", "inclination": "major", "harmonizations": [ "harmonization_1_major7plus", "harmonization_2maj_minor7dim5", "harmonization_3maj_minor7", "harmonization_4_minor7plus", "harmonization_5_major7", "harmonization_6min_major7plus5sharp", "harmonization_7maj_diminished7" ] } Have to say, I'm pretty satisfied with how extensible this turned out to be. Thanks for reading đ¶

The iOS internationalization basics I keep forgetting
- iOS
- formatting
- date
- currency
- timezone
- locale
- language
Localizations, locales, timezones, date and currency formatting... it's shocking how easy it is to forget how they work and how to use them correctly. In this article, I try to summarize the bare minimum one needs to know to add internationalization support to an iOS app.
In this article, I try to summarize the bare minimum one needs to know to add internationalization support to an iOS app. Localizations, locales, timezones, date and currency formatting... it's shocking how easy it is to forget how they work and how to use them correctly. After more than 10 years of iOS development, I decided to write down a few notes on the matter, with the hope that they will come in handy again in the future, hopefully not only to me.

TL;DR From Apple docs: Date: a specific point in time, independent of any calendar or time zone; TimeZone: information about standard time conventions associated with a specific geopolitical region; Locale: information about linguistic, cultural, and technological conventions for use in formatting data for presentation. Rule of thumb: All DateFormatters should use the locale and the timezone of the device; All NumberFormatters, in particular those with numberStyle set to .currency (for the sake of this article), should use a specific locale so that prices are not shown in the wrong currency.

General notes on formatters Let's start by stating the obvious. Since iOS 10, Foundation (finally) provides ISO8601DateFormatter, which, alongside DateFormatter and NumberFormatter, inherits from Formatter.

Formatter: locale property / timeZone property
ISO8601DateFormatter: ✗ / ✓
DateFormatter: ✓ / ✓
NumberFormatter: ✓ / ✗

In an app that only consumes data from an API, the main purpose of ISO8601DateFormatter is to convert strings to dates (String -> Date) more than the inverse. DateFormatter is then used to format dates (Date -> String) to ultimately show the values in the UI. NumberFormatter, instead, converts numbers (prices in the vast majority of the cases) to strings (NSNumber/Decimal -> String).

Formatting dates It seems the following 4 are amongst the most common ISO 8601 formats, including the optional UTC offset. A: 2019-10-02T16:53:42 B: 2019-10-02T16:53:42Z C: 2019-10-02T16:53:42-02:00 D: 2019-10-02T16:53:42.974Z In this article I'll stick to these formats. The 'Z' at the end of an ISO 8601 date indicates that it is in UTC, not a local time zone.

Locales Converting strings to dates (String -> Date) is done using ISO8601DateFormatter objects set up with various formatOptions. Once we have a Date object, we can deal with the formatting for the presentation. Here, the locale is important and things can get a bit tricky. Locales have nothing to do with timezones: locales are for applying a format according to a language/region. Locale identifiers are in the form of <language_identifier>_<region_identifier> (e.g. en_GB). We should use the user's locale when formatting dates (Date -> String). Consider a British user moving to Italy: the app should keep showing a UI localized in English, and the same applies to the dates, which should be formatted using the en_GB locale. Using the it_IT locale would show "2 ott 2019, 17:53" instead of the correct "2 Oct 2019 at 17:53". Locale.current shows the locale set (overridden) in the iOS simulator, and setting the language and region in the scheme's options comes in handy for debugging. Some might think that it's acceptable to use Locale.preferredLanguages.first and create a Locale from it with let preferredLanguageLocale = Locale(identifier: Locale.preferredLanguages.first!) and set it on the formatters.
I think that doing so is not great since we would display dates using the Italian format but we won't necessarily be using the Italian language for the other UI elements as the app might not have the IT localization, causing an inconsistent experience. In short: don't use preferredLanguages, best to use Locale.current. Apple strongly suggests using en_US_POSIX pretty much everywhere (1, 2). From Apple docs: [...] if you're working with fixed-format dates, you should first set the locale of the date formatter to something appropriate for your fixed format. In most cases the best locale to choose is "en_US_POSIX", a locale that's specifically designed to yield US English results regardless of both user and system preferences. "en_US_POSIX" is also invariant in time (if the US, at some point in the future, changes the way it formats dates, "en_US" will change to reflect the new behaviour, but "en_US_POSIX" will not), and between machines ("en_US_POSIX" works the same on iOS as it does on OS X, and as it it does on other platforms). Once you've set "en_US_POSIX" as the locale of the date formatter, you can then set the date format string and the date formatter will behave consistently for all users. I couldn't find a really valid reason for doing so and quite frankly using the device locale seems more appropriate for converting dates to strings. Here is the string representation for the same date using different locales: en_US_POSIX: May 2, 2019 at 3:53 PM en_GB: 2 May 2019 at 15:53 it_IT: 2 mag 2019, 15:53 The above should be enough to show that en_US_POSIX is not what we want to use in this case, but it has more to do with maintaining a standard for communication across machines. From this article: "[...] Unless you specifically need month and/or weekday names to appear in the user's language, you should always use the special locale of en_US_POSIX. This will ensure your fixed format is actually fully honored and no user settings override your format. This also ensures month and weekday names appear in English. Without using this special locale, you may get 24-hour format even if you specify 12-hour (or visa-versa). And dates sent to a server almost always need to be in English." Timezones Stating the obvious one more time: Greenwich Mean Time (GMT) is a time zone while Coordinated Universal Time (UTC) is a time standard. There is no time difference between them. Timezones are fundamental to show the correct date/time in the final text shown to the user. The timezone value is taken from macOS and the iOS simulator inherits it, meaning that printing TimeZone.current, shows the timezone set in the macOS preferences (e.g. Europe/Berlin). Show me some code Note that in the following example, we use GMT (Greenwich Mean Time) and CET (Central European Time), which is GMT+1. Mind that it's best to reuse formatters since the creation is expensive. 
class CustomDateFormatter { private let dateFormatter: DateFormatter = { let dateFormatter = DateFormatter() dateFormatter.dateStyle = .medium dateFormatter.timeStyle = .short return dateFormatter }() private let locale: Locale private let timeZone: TimeZone init(locale: Locale = .current, timeZone: TimeZone = .current) { self.locale = locale self.timeZone = timeZone } func string(from date: Date) -> String { dateFormatter.locale = locale dateFormatter.timeZone = timeZone return dateFormatter.string(from: date) } } let stringA = "2019-11-02T16:53:42" let stringB = "2019-11-02T16:53:42Z" let stringC = "2019-11-02T16:53:42-02:00" let stringD = "2019-11-02T16:53:42.974Z" // The ISO8601DateFormatter's extension (redacted) // internally uses multiple formatters, each one set up with different // options (.withInternetDateTime, .withFractionalSeconds, withFullDate, .withTime, .withColonSeparatorInTime) // to be able to parse all the formats. // timeZone property is set to GMT. let dateA = ISO8601DateFormatter.date(from: stringA)! let dateB = ISO8601DateFormatter.date(from: stringB)! let dateC = ISO8601DateFormatter.date(from: stringC)! let dateD = ISO8601DateFormatter.date(from: stringD)! var dateFormatter = CustomDateFormatter(locale: Locale(identifier: "en_GB"), timeZone: TimeZone(identifier: "GMT")!) dateFormatter.string(from: dateA) // 2 Nov 2019 at 16:53 dateFormatter.string(from: dateB) // 2 Nov 2019 at 16:53 dateFormatter.string(from: dateC) // 2 Nov 2019 at 18:53 dateFormatter.string(from: dateD) // 2 Nov 2019 at 16:53 dateFormatter = CustomDateFormatter(locale: Locale(identifier: "it_IT"), timeZone: TimeZone(identifier: "CET")!) dateFormatter.string(from: dateA) // 2 nov 2019, 17:53 dateFormatter.string(from: dateB) // 2 nov 2019, 17:53 dateFormatter.string(from: dateC) // 2 nov 2019, 19:53 dateFormatter.string(from: dateD) // 2 nov 2019, 17:53 Using the CET timezone also for ISO8601DateFormatter, the final string produced for dateA would respectively be "15:53" when formatted with GMT and "16:53" when formatted with CET. As long as the string passed to ISO8601DateFormatter is in UTC, it's irrelevant to set the timezone on the formatter. Apple suggests to set the timeZone property to UTC with TimeZone(secondsFromGMT: 0), but this is irrelevant if the string representing the date already includes the timezone. If your server returns a string representing a date that is not in UTC, it's probably because of one of the following 2 reasons: it's not meant to be in UTC (questionable design decision indeed) and therefore the timezone of the device should be used instead; the backend developers implemented it wrong and they should add the 'Z 'at the end of the string if what they intended is to have the date in UTC. In short: All DateFormatters should have timezone and locale set to .current and avoid handling non-UTC string if possible. Formatting currencies ⏠$ „ ÂŁ The currency symbol and the formatting of a number should be defined via a Locale, and they shouldn't be set/changed on the NumberFormatter. Don't use the user's locale (Locale.current) because it could be set to a region not supported by the app. Let's consider the example of a user's locale to be en_US, and the app to be available only for the Italian market. 
We must set a locale Locale(identifier: "it_IT") on the formatter, so that: prices will be shown only in Euro (not American Dollar); the format used will be the one of the country's language (for Italy, "12,34 €", not any other variation such as "€12.34"). class CurrencyFormatter { private let locale: Locale init(locale: Locale = .current) { self.locale = locale } func string(from decimal: Decimal, overriddenCurrencySymbol: String? = nil) -> String { let formatter = NumberFormatter() formatter.numberStyle = .currency if let currencySymbol = overriddenCurrencySymbol { // no point in doing this on a NumberFormatter ❌ formatter.currencySymbol = currencySymbol } formatter.locale = locale return formatter.string(from: decimal as NSNumber)! } } let itCurrencyFormatter = CurrencyFormatter(locale: Locale(identifier: "it_IT")) let usCurrencyFormatter = CurrencyFormatter(locale: Locale(identifier: "en_US")) let price1 = itCurrencyFormatter.string(from: 12.34) // "12,34 €" ✅ let price2 = usCurrencyFormatter.string(from: 12.34) // "$12.34" ❌ let price3 = itCurrencyFormatter.string(from: 12.34, overriddenCurrencySymbol: "₿") // "12,34 ₿" ❌ let price4 = usCurrencyFormatter.string(from: 12.34, overriddenCurrencySymbol: "₿") // "₿ 12.34" ❌ In short: All NumberFormatters should have the locale set to the one of the country targeted and no currencySymbol property overridden (it's inherited from the locale).

Languages Stating the obvious one more time, but there are very rare occasions that justify forcing the language in the app: func setLanguage(_ language: String) { let userDefaults = UserDefaults.standard userDefaults.set([language], forKey: "AppleLanguages") } The above circumvents the Apple localization mechanism and needs an app restart, so don't do it and localize the app by the book: add localizations in Project -> Localizations; create a Localizable.strings file and tap the localize button in the inspector; always use NSLocalizedString() in code. Let's consider this content of Localizable.strings (English): "kHello" = "Hello"; "kFormatting" = "Some formatting 1) %@ 2) %d."; and this for another language (e.g. Italian) Localizable.strings (Italian): "kHello" = "Ciao"; "kFormatting" = "Esempio di formattazione 1) %@ 2) %d.";

Simple localization Here's the trivial example: let localizedString = NSLocalizedString("kHello", comment: "") If Locale.current.languageCode is it, the value would be 'Ciao', and 'Hello' otherwise.

Formatted localization For formatted strings, use the following: let stringWithFormats = NSLocalizedString("kFormatting", comment: "") String.localizedStringWithFormat(stringWithFormats, "some value", 3) As before, if Locale.current.languageCode is it, the value would be 'Esempio di formattazione 1) some value 2) 3.', and 'Some formatting 1) some value 2) 3.' otherwise.

Plurals localization For plurals, create a Localizable.stringsdict file and tap the localize button in the inspector. Localizable.strings and Localizable.stringsdict are independent, so there are no cross-references (something that often tricked me).
Here is a sample content: <dict> <key>kPlurality</key> <dict> <key>NSStringLocalizedFormatKey</key> <string>Interpolated string: %@, interpolated number: %d, interpolated variable: %#@COUNT@.</string> <key>COUNT</key> <dict> <key>NSStringFormatSpecTypeKey</key> <string>NSStringPluralRuleType</string> <key>NSStringFormatValueTypeKey</key> <string>d</string> <key>zero</key> <string>nothing</string> <key>one</key> <string>%d object</string> <key>two</key> <string></string> <key>few</key> <string></string> <key>many</key> <string></string> <key>other</key> <string>%d objects</string> </dict> </dict> </dict> Localizable.stringsdict undergo the same localization mechanism of its companion Localizable.strings. It's mandatory to only implement 'other', but an honest minimum includes 'zero', 'one', and 'other'. Given the above content, the following code should be self-explanatory: let localizedHello = NSLocalizedString("kHello", comment: "") // from Localizable.strings let stringWithPlurals = NSLocalizedString("kPlurality", comment: "") // from Localizable.stringsdict String.localizedStringWithFormat(stringWithPlurals, localizedHello, 42, 1) With the en language, the value would be 'Interpolated string: Hello, interpolated number: 42, interpolated variable: 1 object.'. Use the scheme's option to run with a specific Application Language (it will change the current locale language and therefore also the output of the DateFormatters). If the language we've set or the device language are not supported by the app, the system falls back to en. References https://en.wikipedia.org/wiki/ISO_8601 https://nsdateformatter.com/ https://foragoodstrftime.com/ https://epochconverter.com/ So... that's all folks. đ

Modular iOS Architecture @ Just Eat
- iOS
- Just Eat
- architecture
- modularization
- Cocoapods
The journey towards a modular architecture taken by the Just Eat iOS team.
The journey we took to restructure our mobile apps towards a modular architecture. Originally published on the Just Eat Engineering Blog. Overview Modular mobile architectures have been a hot topic over the past 2 years, counting a plethora of articles and conference talks. Almost every big company promoted and discussed modularization publicly as a way to scale big projects. At Just Eat, we jumped on the modular architecture train probably before it was mainstream and, as we'll discuss in this article, the root motivation was quite peculiar in the industry. Over the years (2016-2019), we've completely revamped our iOS products from the ground up and learned a lot during this exciting and challenging journey. There is so much to say about the way we structured our iOS stack that it would probably deserve a series of articles, some of which have previously been posted. Here we summarize the high-level iOS architecture we crafted, covering the main aspects in a way concise enough for the reader to get a grasp of them and hopefully learn some valuable tips. Modular Architecture Lots of information can be found online on modular architectures. In short: A modular architecture is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each one contains everything necessary to execute only one aspect of the desired functionality. Note that modular design applies to the code you own. A project with several third-party dependencies but no sensible separation for the code written by your team is not considered modular. A modular design is more about the principle rather than the specific technology. One could achieve it in a variety of ways and with different tools. Here are some key points and examples that should inform the decision of the ifs and the hows of implementing modularization: Business reasons The company requires that parts of the codebase are reused and shared across projects, products, and teams; The company requires multiple products to be unified into a single one. Tech reasons The codebase has grown to a state where things become harder and harder to maintain and to iterate over; Development is slowed down due to multiple developers working on the same monolithic codebase; Besides reusing code, you need to port functionalities across projects/products. Multiple teams The company structured teams following strategic models (e.g. Spotify model) and functional teams only work on a subset of the final product; Ownership of small independent modules distributed across teams enables faster iterations; The much smaller cognitive overhead of working on a smaller part of the whole product can vastly simplify the overall development. Pre-existing knowledge Members of the team might already be familiar with specific solutions (Carthage, CocoaPods, Swift Package Manager, manual frameworks setup within Xcode). In the case of a specific familiarity with a system, it's recommended to start with it since all solutions come with pros and cons and there's not a clear winner at the time of writing. Modularizing code (if done sensibly) is almost always a good thing: it enforces separation of concerns, keeps complexity under control, allows faster development, etc. It has to be said that it's not necessarily what one needs for small projects and its benefits become tangible only after a certain complexity threshold is crossed. 
Journey to a new architecture

In 2014, Just Eat was a completely different environment from today and back then the business decided to split the tech department into separate departments: one for the UK and one for the other countries. While this was done with the best intentions to allow faster evolution in the main market (UK), it quickly created a hard division between teams, services, and people. In less than 6 months, the UK and International APIs and consumer clients deeply diverged, introducing country-specific logic and behaviors. By mid-2016 the intent of "merging back" into a single global platform was internally announced and at that time it almost felt like a company acquisition. This is when we learned the importance of integrating people before technology. The teams didn't know each other very well and became reasonably territorial about their codebases. It didn't help that the teams spanned multiple cities. It's understandable that getting to an agreement on how to go back to a single, global, and unified platform took months. The options we considered spanned from rewriting the product from scratch to picking one of the two existing ones and making it global. A complete rewrite would have eventually turned out to be a big-bang release with the risk of regressions being too high; not something sensible or safe to pursue. Picking one codebase over the other would have necessarily let down one of the two teams and caused the re-implementation of some missing features present in the other codebase. At that time, the UK project was in better shape and new features were developed for the UK market first. The international project was a bit behind due to the extra complexity of supporting multiple countries and features being too market-specific. During that time, the company was also undergoing massive growth and, with multiple functional teams having been created internally, there was an increasing need to move towards modularization. Therefore, we decided to gradually and strategically modularize parts of the mobile products and onboard them onto the other codebase in a controlled and safe way. In doing so, we took the opportunity to deeply refactor and, in the vast majority of the cases, rewrite parts in their entirety, enabling new designs, better tests, higher code coverage, and - holistically - a fully Swift codebase. We knew that the best way to refactor and clean up the code was by following a bottom-up approach. We started with the foundations to solve small and well-defined problems - such as logging, tracking, theming - enabling the team to learn to think modular. We later moved to isolating big chunks of code into functional modules to be able to onboard them into the companion codebase and ship them on a phased rollout. We soon realized we needed a solid engine to handle run-time configurations and remote feature flagging to allow switching ON and OFF features as well as entire modules. As discussed in a previous article, we developed JustTweak to achieve this goal. At the end of the journey, the UK and the International projects would look very similar, sharing a number of customizable modules, and differing only in the orchestration layer in the apps. The Just Eat iOS apps are far bigger and more complex than they might look at first glance.
Generically speaking, merging different codebases takes orders of magnitude longer than separating them, and for us, it was a process that took over 3 years, being possible thanks to unparalleled efforts of engineers brought to work together. Over this time, the whole team learned a lot, from the basics of developing code in isolation to how to scale a complex system. Holistic Design đ€ The following diagram outlines the modular architecture in its entirety as it is at the time of writing this article (December 2019). We can appreciate a fair number of modules clustered by type and the different consumer apps. Modular iOS architecture - holistic design Whenever possible, we took the opportunity to abstract some modules having them in a state that allows open-sourcing the code. All of our open-source modules are licensed under Apache 2 and can be found at github.com/justeat. Apps Due to the history of Just Eat described above, we build different apps per country per brand from different codebases All the modularization work we did bottom-up brought us to a place where the apps differ only in the layer orchestrating the modules. With all the consumer-facing features been moved to the domain modules, there is very little code left in the apps. Domain Modules Domain modules contain features specific to an area of the product. As the diagram above shows, the sum of all those parts makes up the Just Eat apps. These modules are constantly modified and improved by our teams and updating the consumer apps to use newer versions is an explicit action. We don't particularly care about backward compatibility here since we are the sole consumers and it's common to break the public interface quite often if necessary. It might seem at first that domain modules should depend on some Core modules (e.g. APIClient) but doing so would complicate the dependency tree as we'll discuss further in the "Dependency Management" section of this article. Instead, we inject core modules' services, simply making them conformant to protocols defined in the domain module. In this way, we maintain a good abstraction and avoid tangling the dependency graph. Core & Shared modules The Core and Shared modules represent the foundations of our stack, things like: custom UI framework theming engine logging, tracking, and analytics libraries test utilities client for all the Just Eat APIs feature flagging and experimentation engine and so forth. These modules - which are sometimes also made open-source - should not change frequently due to their nature. Here backward compatibility is important and we deprecate old APIs when introducing new ones. Both apps and domain modules can have shared modules as dependencies, while core modules can only be used by the apps. Updating the backbone of a system requires the propagation of the changes up in the stack (with its maintenance costs) and for this reason, we try to keep the number of shared modules very limited. Structure of a module As we touched on in previous articles, one of our fundamental principles is "always strive to find solutions to problems that are scalable and hide complexity as much as possible". We are almost obsessed with making things as simple as they can be. When building a module, our root principle is: Every module should be well tested, maintainable, readable, easily pluggable, and reasonably documented. The order of the adjectives implies some sort of priority. First of all, the code must be unit tested, and in the case of domain modules, UI tests are required too. 
Without reasonable code coverage, no code is shipped to production. This is the first step to code maintainability, where maintainable code is intended as "code that is easy to modify or extend". Readability is down to reasonable design, naming convention, coding standards, formatting, and all that jazz. Every module exposes a Facade that is very succinct, usually no more than 200 lines long. This entry point is what makes a module easily pluggable. In our module blueprint, the bare minimum is a combination of a facade class, injected dependencies, and one or more configuration objects driving the behavior of the module (leveraging the underlying feature flagging system powered by JustTweak discussed in a previous article). The facade should be all a developer needs to know in order to consume a module without having to look at implementation details. Just to give you an idea, here is an excerpt from the generated public interface of the Account module (not including the protocols): public typealias PasswordManagementService = ForgottenPasswordServiceProtocol & ResetPasswordServiceProtocol public typealias AuthenticationService = LoginServiceProtocol & SignUpServiceProtocol & PasswordManagementService & RecaptchaServiceProtocol public typealias UserAccountService = AccountInfoServiceProtocol & ChangePasswordServiceProtocol & ForgottenPasswordServiceProtocol & AccountCreditServiceProtocol public class AccountModule { public init(settings: Settings, authenticationService: AuthenticationService, userAccountService: UserAccountService, socialLoginServices: [SocialLoginService], userInfoProvider: UserInfoProvider) public func startLogin(on viewController: UIViewController) -> FlowCoordinator public func startResetPassword(on viewController: UIViewController, token: Token) -> FlowCoordinator public func startAccountInfo(on navigationController: UINavigationController) -> FlowCoordinator public func startAccountCredit(on navigationController: UINavigationController) -> FlowCoordinator public func loginUsingSharedWebCredentials(handler: @escaping (LoginResult) -> Void) } Domain module public interface example (Account module) We believe code should be self-descriptive and we tend to put comments only on code that really deserves some explanation, very much embracing John Ousterhout's approach described in A Philosophy of Software Design. Documentation is mainly relegated to the README file and we treat every module as if it was an open-source project: the first thing consumers would look at is the README file, and so we make it as descriptive as possible. Overall design We generate all our modules using CocoaPods via $ pod lib create which creates the project with a standard template generating the Podfile, podspec, and demo app in a breeze. The podspec could specify additional dependencies (both third-party and Core modules) that the demo app's Podfile could specify core modules dependencies alongside the module itself which is treated as a development pod as per standard setup. The backbone of the module, which is the framework itself, encompasses both business logic and UI meaning that both source and asset files are part of it. In this way, the demo apps are very much lightweight and only showcase module features that are implemented in the framework. The following diagram should summarize it all. Design of a module with Podfile and podspec examples Demo Apps Every module comes with a demo app we give particular care to. 
Demo apps are treated as first-class citizens and the stakeholders are both engineers and product managers. They massively help to showcase the module features - especially those under development - vastly simplify collaboration across Engineering, Product, and Design, and force a good mock-based test-first approach. Following is a SpringBoard page showing our demo apps, very useful to individually showcase all the functionalities implemented over time, some of which might not surface in the final product to all users. Some features are behind experiments, some still in development, while others might have been retired but still present in the modules. Every demo app has a main menu to: access the features force a specific language toggle configuration flags via JustTweak customize mock data We show the example of the Account module demo app on the right. Domain modules demo apps Internal design It's worth noting that our root principle mentioned above does not include any reference to the internal architecture of a module and this is intentional. It's common for iOS teams in the industry to debate on which architecture to adopt across the entire codebase but the truth is that such debate aims to find an answer to a non-existing problem. With an increasing number of modules and engineers, it's fundamentally impossible to align on a single paradigm shared and agreed upon by everyone. Betting on a single architectural design would ultimately let down some engineers who would complain down the road that a different design would have played out better. We decided to stick with the following rule of thumb: Developers are free to use the architectural design they feel would work better for a given problem. This approach brought us to have a variety of different designs - spanning from simple old-school MVC, to a more evolved VIPER - and we constantly learn from each other's code. What's important at the end of the day is that techniques such as inversion of control, dependency injection, and more generally the SOLID principles, are used appropriately to embrace our root principle. Dependency Management We rely heavily on CocoaPods since we adopted it in the early days as it felt like the best and most mature choice at the time we started modularizing our codebase. We think this still holds at the time of writing this article but we can envision a shift to SPM (Swift Package Manager) in 1-2 years time. With a growing number of modules, comes the responsibility of managing the dependencies between them. No panacea can cure dependency hell, but one should adopt some tricks to keep the complexity of the stack under reasonable control. Here's a summary of what worked for us: Always respect semantic versioning; Keep the dependency graph as shallow as possible. From our apps to the leaves of the graph there are no more than 2 levels; Use a minimal amount of shared dependencies. Be aware that every extra level with shared modules brings in higher complexity; Reduce the number of third-party libraries to the bare minimum. Code that's not written and owned by your team is not under your control; Never make modules within a group (domain, core, shared) depend on other modules of the same group; Automate the publishing of new versions. When a pull request gets merged into the master branch, it must also contain a version change in the podspec. 
Our continuous integration system will automatically validate the podspec, publish it to our private spec repository, and in just a matter of minutes the new version becomes available; Fix the version for dependencies in the Podfile. Whether it is a consumer app or a demo app, we want both our modules and third-party libraries not to be updated unintentionally. It's acceptable to use the optimistic operator for third-party libraries to allow automatic updates of new patch versions; Fix the version for third-party libraries in the modules' podspec. This guarantees that modules' behavior won't change in the event of changes in external libraries. Failing to do so would allow defining different versions in the app's Podfile, potentially causing the module to not function correctly or even to not compile; Do not fix the version for shared modules in the modules' podspec. In this way, we let the apps define the version in the Podfile, which is particularly useful for modules that change often, avoiding the hassle of updating the version of the shared modules in every podspec referencing it. If a new version of a shared module is not backward compatible with the module consuming it, the failure would be reported by the continuous integration system as soon as a new pull request gets raised. A note on the Monorepo approach When it comes to dependency management it would be unfair not to mention the opinable monorepo approach. Monorepos have been discussed quite a lot by the community to pose a remedy to dependency management (de facto ignoring it), some engineers praise them, others are quite contrary. Facebook, Google, and Uber are just some of the big companies known to have adopted this technique, but in hindsight, it's still unclear if it was the best decision for them. In our opinion, monorepos can sometimes be a good choice. For example, in our case, a great benefit a monorepo would give us is the ability to prepare a single pull request for both implementing a code change in a module and integrating it into the apps. This will have an even greater impact when all the Just Eat consumer apps are globalized into a single codebase. Onwards and upwards Modularizing the iOS product has been a long journey and the learnings were immense. All in all, it took more than 3 years, from May 2016 to October 2019, always balancing tech and product improvements. Our natural next step is unifying the apps into a single global project, migrating the international countries over to the UK project to ultimately reach the utopian state of having a single global app. All the modules have been implemented in a fairly abstract way and following a white labeling approach, allowing us to extend support to new countries and onboard acquired companies in the easiest possible way.

Lessons learned from handling JWT on mobile
- iOS
- Authorization
- JWT
- Token
- mobile
Implementing Authorization on mobile can be tricky. Here are some recommendations to avoid common issues.
Implementing Authorization on mobile can be tricky. Here are some recommendations to avoid common issues. Originally published on the Just Eat Engineering Blog. Overview Modern mobile apps are more complicated than they used to be back in the early days and developers have to face a variety of interesting problems. While we've put in our two cents on some of them in previous articles, this one is about authorization and what we have learned by handling JWT on mobile at Just Eat. When it comes to authorization, it's standard practice to rely on OAuth 2.0 and the companion JWT (JSON Web Token). We found this important topic was rarely discussed online while much attention was given to new proposed implementations of network stacks, maybe using recent language features or frameworks such as Combine. We'll illustrate the problems we faced at Just Eat for JWT parsing, usage, and (most importantly) refreshing. You should be able to learn a few things on how to make your app more stable by reducing the chance of unauthorized requests allowing your users to virtually always stay logged in. What is JWT JWT stands for JSON Web Token and is an open industry standard used to represent claims transferred between two parties. A signed JWT is known as a JWS (JSON Web Signature). In fact, a JWT has either to be JWS or JWE (JSON Web Encryption). RFC 7515, RFC 7516, and RFC 7519 describe the various fields and claims in detail. What is relevant for mobile developers is the following: JWT is composed of 3 parts dot-separated: Header, Payload, Signature. The Payload is the only relevant part. The Header identifies which algorithm is used to generate the signature. There are reasons for not verifying the signature client-side making the Signature part irrelevant too. JWT has an expiration date. Expired tokens should be renewed/refreshed. JWT can contain any number of extra information specific to your service. It's common practice to store JWTs in the app keychain. Here is a valid and very short token example, courtesy of jwt.io/ which we recommend using to easily decode tokens for debugging purposes. It shows 3 fragments (base64 encoded) concatenated with a dot. eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyLCJleHAiOjE1Nzc3NTA0MDB9.7hgBhNK_ZpiteB3GtLh07KJ486Vfe3WAdS-XoDksJCQ The only field relevant to this document is exp (Expiration Time), part of Payload (the second fragment). This claim identifies the time after which the JWT must not be accepted. In order to accept a JWT, it's required that the current date/time must be before the expiration time listed in the exp claim. It's accepted practice for implementers to consider for some small leeway, usually no more than a few minutes, to account for clock skew. N.B. Some API calls might demand the user is logged in (user-authenticated calls), and others don't (non-user-authenticated calls). JWT can be used in both cases, marking a distinction between Client JWT and User JWT we will refer to later on. The token refresh problem By far the most significant problem we had in the past was the renewal of the token. This seems to be something taken for granted by the mobile community, but in reality, we found it to be quite a fragile part of the authentication flow. If not done right, it can easily cause your customers to end up being logged out, with the consequent frustration we all have experienced as app users. 
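For reference, here's a minimal sketch of how the exp claim described above could be read on iOS. It deliberately skips signature verification (which, as noted, there are reasons not to do client-side) and error reporting; the base64 padding quirk it works around is discussed in more detail later in the article. The function name is purely illustrative.

```swift
import Foundation

// Minimal sketch: extract the 'exp' claim from a JWT by decoding the payload
// (the second of the three dot-separated, base64url-encoded fragments).
// Foundation's base64 decoder requires padding, hence the '=' re-adding.
func expirationDate(fromJWT jwt: String) -> Date? {
    let fragments = jwt.components(separatedBy: ".")
    guard fragments.count == 3 else { return nil }

    // Convert base64url to base64 and restore the padding Foundation expects.
    var base64 = fragments[1]
        .replacingOccurrences(of: "-", with: "+")
        .replacingOccurrences(of: "_", with: "/")
    while base64.count % 4 != 0 { base64.append("=") }

    guard
        let payloadData = Data(base64Encoded: base64),
        let json = try? JSONSerialization.jsonObject(with: payloadData) as? [String: Any],
        let exp = json["exp"] as? TimeInterval
    else { return nil }

    return Date(timeIntervalSince1970: exp)
}
```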
The Just Eat app makes multiple API calls at startup: it fetches the order history to check for in-flight orders, fetches the most up-to-date consumer details, etc. If the token is expired when the user runs the app, a nasty race condition could cause the same refresh token to be used twice, causing the server to respond with a 401 and the app to subsequently log the user out. This can also happen during normal execution when multiple API calls are performed very close to each other and the token expires prior to those. It gets trickier if the client and the server clocks are noticeably out of sync: while the client might believe it is in possession of a valid token, it has already expired. The following diagram should clarify the scenario.

Common misbehavior I couldn't find a company (regardless of size) or indie developer who had implemented a reasonable token refresh mechanism. The common approach seems to be to refresh the token whenever an API call fails with 401 Unauthorized. This not only causes an extra call that could be avoided by locally checking if the token has expired, but it also opens the door to the race condition illustrated above.

Avoid race conditions when refreshing the token We'll explain the solution with some technical details and code snippets, but what's more important is that the reader understands the root problem we are solving and why it should be given the proper attention. The more we thought about it, the more we convinced ourselves that the best way to shield ourselves from race conditions is by using threading primitives when scheduling async requests to fetch a valid token. This means that all the calls would be regulated via a filter that holds off subsequent calls from firing until a valid token is retrieved, either from local storage or, if a refresh is needed, from the remote OAuth server. We'll show examples for iOS, so we've chosen dispatch queues and semaphores (using GCD); fancier and more abstract ways of implementing the solution might exist - in particular by leveraging modern FRP techniques - but ultimately the same primitives are used. For simplicity, let's assume that only user-authenticated API requests need to provide a JWT, commonly put in the Authorization header: Authorization: Bearer <jwt-token> The code below implements the "Get valid JWT" box from the following flowchart. The logic within this section is the one that must be implemented in mutual exclusion, in our solution by using the combination of a serial queue and a semaphore. Here is just the minimum amount of code (Swift) needed to explain the solution. typealias Token = String typealias AuthorizationValue = String struct UserAuthenticationInfo { let bearerToken: Token // the JWT let refreshToken: Token let expiryDate: Date // computed on creation from 'exp' claim var isValid: Bool { return expiryDate.compare(Date()) == .orderedDescending } } protocol TokenRefreshing { func refreshAccessToken(_ refreshToken: Token, completion: @escaping (Result<UserAuthenticationInfo, Error>) -> Void) } protocol AuthenticationInfoStorage { var userAuthenticationInfo: UserAuthenticationInfo? func persistUserAuthenticationInfo(_ authenticationInfo: UserAuthenticationInfo?)
func wipeUserAuthenticationInfo() } class AuthorizationValueProvider { private let authenticationInfoStore: AuthenticationInfoStorage private let tokenRefreshAPI: TokenRefreshing private let queue = DispatchQueue(label: <#label#>, qos: .userInteractive) private let semaphore = DispatchSemaphore(value: 1) init(tokenRefreshAPI: TokenRefreshing, authenticationInfoStore: AuthenticationInfoStorage) { self.tokenRefreshAPI = tokenRefreshAPI self.authenticationInfoStore = authenticationInfoStore } func getValidUserAuthorization(completion: @escaping (Result<AuthorizationValue, Error>) -> Void) { queue.async { self.getValidUserAuthorizationInMutualExclusion(completion: completion) } } } Before performing any user-authenticated request, the network client asks an AuthorizationValueProvider instance to provide a valid user Authorization value (the JWT). It does so via the async method getValidUserAuthorization, which uses a serial queue to handle the requests. The chunky part is getValidUserAuthorizationInMutualExclusion. private func getValidUserAuthorizationInMutualExclusion(completion: @escaping (Result<AuthorizationValue, Error>) -> Void) { semaphore.wait() guard let authenticationInfo = authenticationInfoStore.userAuthenticationInfo else { semaphore.signal() let error = // forge an error for 'missing authorization' completion(.failure(error)) return } if authenticationInfo.isValid { semaphore.signal() completion(.success(authenticationInfo.bearerToken)) return } tokenRefreshAPI.refreshAccessToken(authenticationInfo.refreshToken) { result in switch result { case .success(let authenticationInfo): self.authenticationInfoStore.persistUserAuthenticationInfo(authenticationInfo) self.semaphore.signal() completion(.success(authenticationInfo.bearerToken)) case .failure(let error) where error.isClientError: self.authenticationInfoStore.wipeUserAuthenticationInfo() self.semaphore.signal() completion(.failure(error)) case .failure(let error): self.semaphore.signal() completion(.failure(error)) } } } The method could fire off an async call to refresh the token, and this makes the usage of the semaphore crucial. Without it, the next request to AuthorizationValueProvider would be popped from the queue and executed before the remote refresh completes. The semaphore is initialised with a value of 1, meaning that only one thread can access the critical section at a given time. We make sure to call wait at the beginning of the execution and to call signal only when we have a result and are therefore ready to leave the critical section. If the token found in the local store is still valid, we simply return it; otherwise, it's time to request a new one. In the latter case, if all goes well, we persist the token locally and allow the next request to access the method. In the case of an error, we should be careful and wipe the token only if the error is a legit client error (4xx range). This also includes the usage of a refresh token that is not valid anymore, which could happen, for instance, if the user resets the password on another platform/device. It's critical not to delete the token from the local store in the case of any other error, such as a 5xx or Foundation's common NSURLErrorNotConnectedToInternet (-1009), or else the user would unexpectedly be logged out. It's also important to note that the same AuthorizationValueProvider instance must be used by all the calls: using different ones would mean using different queues, making the entire solution ineffective.
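To show how this plugs into a networking layer, here's an illustrative sketch of a client that routes every user-authenticated request through one shared AuthorizationValueProvider, so that the queue-and-semaphore logic above guards all calls. The APIClient name and shape are hypothetical, not the in-house client mentioned below.

```swift
import Foundation

// Illustrative only: a client that asks the shared AuthorizationValueProvider
// for a valid JWT before every user-authenticated request.
class APIClient {
    private let authorizationProvider: AuthorizationValueProvider
    private let session: URLSession

    init(authorizationProvider: AuthorizationValueProvider, session: URLSession = .shared) {
        self.authorizationProvider = authorizationProvider
        self.session = session
    }

    func performAuthenticated(_ request: URLRequest,
                              completion: @escaping (Result<Data, Error>) -> Void) {
        authorizationProvider.getValidUserAuthorization { result in
            switch result {
            case .failure(let error):
                // No valid JWT could be retrieved or refreshed.
                completion(.failure(error))
            case .success(let authorizationValue):
                var authenticatedRequest = request
                authenticatedRequest.setValue("Bearer \(authorizationValue)",
                                              forHTTPHeaderField: "Authorization")
                self.session.dataTask(with: authenticatedRequest) { data, _, error in
                    if let error = error {
                        completion(.failure(error))
                    } else {
                        completion(.success(data ?? Data()))
                    }
                }.resume()
            }
        }
    }
}
```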
It seemed clear that the network client we developed in-house had to embrace JWT refresh logic at its core so that all the API calls, even new ones added in the future, would make use of the same authentication flow. General recommendations Here are a couple more (minor) suggestions we think are worth sharing since they might save you implementation time or influence the design of your solution. Correctly parse the Payload Another problem - quite trivial, yet one that doesn't seem to be discussed much - is the parsing of the JWT, which can fail in some cases. In our case, this was related to the base64 encoding function and "adjusting" the base64 payload to be parsed correctly. In some implementations of base64, the padding character is not needed for decoding, since the number of missing bytes can be calculated, but in Foundation's implementation it is mandatory. This caused us some head-scratching and this StackOverflow answer helped us. The solution is - more officially - stated in RFC 7515 - Appendix C and here is the corresponding Swift code: func base64String(_ input: String) -> String { var base64 = input .replacingOccurrences(of: "-", with: "+") .replacingOccurrences(of: "_", with: "/") switch base64.count % 4 { case 2: base64 = base64.appending("==") case 3: base64 = base64.appending("=") default: break } return base64 } The majority of developers rely on external libraries to ease the parsing of the token, but as we often do, we implemented our solution from scratch, without relying on a third-party library. Nonetheless, we feel JSONWebToken by Kyle Fuller is a very good one and it seems to implement JWT faithfully to the RFC, clearly including the necessary base64 decode function. Handle multiple JWTs for multiple app states As previously stated, when using JWT as an authentication method for non-user-authenticated calls, we need to cater for at least 3 states, shown in the following enum: enum AuthenticationStatus { case notAuthenticated case clientAuthenticated case userAuthenticated } On a fresh install, we can expect to be in the .notAuthenticated state, but as soon as the first API call is ready to be performed, a valid Client JWT has to be fetched and stored locally (at this stage, other authentication mechanisms are used, most likely Basic Auth), moving to the .clientAuthenticated state. Once the user completes the login or signup procedure, a User JWT is retrieved and stored locally (but separately from the Client JWT), entering the .userAuthenticated state, so that in the case of a logout we are left with a (hopefully still valid) Client JWT. In this scenario, almost all transitions are possible: A couple of recommendations here: if the user is logged in, it is important to use the User JWT also for the non-user-authenticated calls, as the server may personalise the response (e.g. the list of restaurants in the Just Eat app); store both the Client and the User JWT, so that if the user logs out, the app is left with the Client JWT ready to be used to perform non-user-authenticated requests, saving an unnecessary call to fetch a new token. Conclusion In this article, we've shared some learnings from handling JWT on mobile that are not commonly discussed within the community. As a good practice, it's always best to hide complexity and implementation details.
Baking the refresh logic described above into your API client is a great way to avoid developers having to deal with complex logic to provide authorization, and it enables all the API calls to undergo the same authentication mechanism. Consumers of an API client should not have the ability to gather the JWT, as it's not their concern to use it or to fiddle with it. We hope this article helps to raise awareness of how to better handle the usage of JWT in mobile applications, in particular making sure we always do our best to avoid accidental logouts and provide a better user experience.
A Smart Feature Flagging System for iOS
- iOS
- feature flags
- Optimizely
- Just Eat
At Just Eat we have experimentation and feature flagging at our heart and we've developed a component, named JustTweak, to make things easier on iOS.
How the iOS team at Just Eat built a scalable open-source solution to handle local and remote flags. Originally published on the Just Eat Engineering Blog. Overview At Just Eat we have experimentation at our heart, and it is very much dependent on feature flagging/toggling. If we may be so bold, here's an analogy: feature flagging is to experimentation as machine learning is to AI, you cannot have the second without the first one. We've developed an in-house component, named JustTweak, to handle feature flags and experiments on iOS without the hassle. We open-sourced JustTweak on github.com in 2017 and we have been evolving it ever since; in particular, with support for major experimentation platforms such as Optimizely and Firebase Remote Config. JustTweak has been instrumental in evolving the consumer Just Eat app in a fast and controlled manner, as well as to support a large number of integrations and migrations happening under the hood. In this article, we describe the feature flagging architecture and engine, with code samples and integration suggestions. What is feature flagging Feature flagging, in its original form, is a software development technique that provides an alternative to maintaining multiple source-code branches, so that a feature can be tested even before it is completed and ready for release. Feature flags are used in code to show/hide or enable/disable specific features at runtime. The technique also allows developers to release a version of a product that has unfinished features, that can be hidden from the user. Feature toggles also allow shorter software integration cycles and small incremental versions of software to be delivered without the cost of constant branching and merging - needless to say, this is crucial to have on iOS due to the App Store review process not allowing continuous delivery. A boolean flag in code is used to drive what code branch will run, but the concept can easily be extended to non-boolean flags, making them more of configuration flags that drive behavior. As an example, at Just Eat we have been gradually rewriting the whole application over time, swapping and customizing entire modules via configuration flags, allowing gradual switches from old to new features in a way transparent to the user. Throughout this article, the term 'tweaks' is used to refer to feature/configuration flags. A tweak can have a value of different raw types, namely Bool, String, Int, Float, and Double. Boolean tweaks can be used to drive features, like so: let isFeatureXEnabled: Bool = ... if isFeatureXEnabled { // show feature X } else { // don't show feature X } Other types of tweaks are instead useful to customise a given feature. Here is an example of configuring the environment using tweaks: let publicApiHost: String = ... let publicApiPort: Int? = ... let endpoint = Endpoint(scheme: "https", host: publicApiHost, port: publicApiPort, path: "/restaurant/:id/menu") // perform a request using the above endpoint object Problem The crucial part to get right is how and from where the flag values (isFeatureXEnabled, publicApiHost, and publicApiPort in the examples above) are fetched. Every major feature flagging/experimentation platform in the market provides its own way to fetch the values, and sometimes the APIs to do so significantly differ (e.g. Firebase Remote Config Vs Optimizely). 
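To give an idea of how much the vendor APIs can differ in practice, here is a rough sketch of reading the same boolean flag from Firebase Remote Config and from the Optimizely Swift SDK. The exact signatures vary between SDK versions, and the optimizely client and userId values are assumed to exist and be configured, so treat this as an illustrative comparison rather than a reference.

// Firebase Remote Config: flags are fetched per app instance and read by key.
let isFeatureXEnabled = RemoteConfig.remoteConfig().configValue(forKey: "feature_x").boolValue

// Optimizely: feature flags are resolved per user, so a user identifier is required.
let isFeatureXEnabledForUser = optimizely.isFeatureEnabled(featureKey: "feature_x", userId: userId)

Two providers, two different mental models and call shapes: exactly the kind of divergence a facade can hide.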
Aware of the fact that it's increasingly difficult to build any kind of non-trivial app without leveraging external dependencies, it's important to bear in mind that external dependencies pose a great threat to the long-term stability and viability of any application. Following are some issues related to third-party experimentation solutions: third-party SDKs are not under your control; using third-party SDKs in a modularly architected app can easily cause dependency hell; third-party SDKs are easily abused and various areas of your code will become entangled with them; your company might decide to move to a different solution in the future and such a switch comes with costs; depending on the adopted solution, you might end up tying your app more and more to platform-specific features that have no correspondence elsewhere; it is very hard to support multiple feature flag providers. For the above reasons, it is best to hide third-party SDKs behind some sort of layer and to implement an orchestration mechanism to allow fetching of flag values from different providers. We'll describe how we've achieved this in JustTweak. A note on the approach When designing software solutions, a clear trait was identified over time in the iOS team, which boils down to the kind of mindset and principle being used: Always strive to find solutions to problems that are scalable and hide complexity as much as possible. One word you would often hear if you were to work in the iOS team is 'Facade', a design pattern that serves as a front-facing interface masking more complex underlying or structural code. Facades are all over the place in our code: we try to keep components' interfaces as simple as possible so that other engineers can utilize them with minimal effort without necessarily knowing the implementation details. Furthermore, the more succinct an interface is, the rarer misuses become. We have some open source components embracing this approach, such as JustPersist, JustLog, and JustTrack. JustTweak is no exception and the code to integrate it successfully in a project is minimal. Sticking to the above principle, the idea behind JustTweak is to have a single entry point to gather flag values, hiding the implementation details regarding which source the flag values are gathered from. JustTweak to the rescue JustTweak provides a simple facade interface interacting with multiple configurations that are queried respecting a certain priority. Configurations wrap specific sources of tweaks, which are then used to drive decisions or configurations in the client code. You can find JustTweak on CocoaPods and it's on version 5.0.0 at the time of writing. We plan to add support for Carthage and Swift Package Manager in the future. A demo app is also available for you to try it out. With JustTweak you can achieve the following: use a local JSON configuration providing default tweak values; use a number of remote configuration providers, such as Firebase and Optimizely, to run A/B tests and feature flagging; enable, disable, and customize features locally at runtime; provide a dedicated UI for customization (this comes in particularly handy for features that are under development, to showcase the progress to stakeholders). Here is a screenshot of the TweakViewController taken from the demo app. Tweak values changed via this screen are immediately available to your code at runtime. Stack setup The facade class previously mentioned is represented by the TweakManager.
There should only be a single instance of the manager, ideally configured at startup, passed around via dependency injection, and kept alive for the whole lifespan of the app. Following is an example of the kind of stack, implemented as a static let. static let tweakManager: TweakManager = { // mutable configuration (to override tweaks from other configurations) let userDefaultsConfiguration = UserDefaultsConfiguration(userDefaults: .standard) // remote configurations (optional) let optimizelyConfiguration = OptimizelyConfiguration() let firebaseConfiguration = FirebaseConfiguration() // local JSON configuration (default tweaks) let jsonFileURL = Bundle.main.url(forResource: "Tweaks", withExtension: "json")! let localConfiguration = LocalConfiguration(jsonURL: jsonFileURL) // priority is defined by the order in the configurations array // (from highest to lowest) let configurations: [Configuration] = [userDefaultsConfiguration, optimizelyConfiguration, firebaseConfiguration, localConfiguration] return TweakManager(configurations: configurations) }() JustTweak comes with three configurations out-of-the-box: UserDefaultsConfiguration, which is mutable and uses UserDefaults as a key/value store; LocalConfiguration, which is read-only and uses a JSON configuration file that is meant to be the default configuration; and EphemeralConfiguration, which is simply an instance of NSMutableDictionary. In addition, JustTweak defines the Configuration and MutableConfiguration protocols you can implement to create your own configurations to fit your needs. In the example project, you can find a few example configurations which you can use as a starting point. You can wrap any source of flags in a concrete implementation of the above protocols. Since the protocol methods are synchronous, you'll have to make sure that the underlying source has been initialised as soon as possible at startup. All the experimentation platforms provide mechanisms to do so; for example, here is how Optimizely does it. The order of the objects in the configurations array defines the configurations' priority. The MutableConfiguration with the highest priority, such as UserDefaultsConfiguration in the example above, will be used to reflect the changes made in the UI (TweakViewController). The LocalConfiguration should have the lowest priority as it provides the default values from a local JSON file. It's also the one used by the TweakViewController to populate the UI. When fetching a tweak, the engine will inspect the chain of configurations in order and pick the tweak from the first configuration having it. The following diagram outlines a possible setup where values present in Optimizely override others in the subsequent configurations. Eventually, if no override is found, the local configuration will return the default tweak baked into the app. Structuring the stack this way brings various advantages: the same engine is used to customise the app for development, production, and test runs; consumers only interface with the facade and can ignore the implementation details; new code put behind flags can be shipped with confidence since we rely on a tested engine; the ability to remotely override tweaks de facto allows greatly customising the app without the need for a new release. TweakManager gets populated with the tweaks listed in the JSON file used as the backing store of the LocalConfiguration instance. It is therefore important to list every supported tweak in there so that development builds of the app can allow tweaking the values.
Here is an excerpt from the file used in the TweakViewController screenshot above. { "ui_customization": { "display_red_view": { "Title": "Display Red View", "Description": "shows a red view in the main view controller", "Group": "UI Customization", "Value": false }, ... "red_view_alpha_component": { "Title": "Red View Alpha Component", "Description": "defines the alpha level of the red view", "Group": "UI Customization", "Value": 1.0 }, "label_text": { "Title": "Label Text", "Description": "the title of the main label", "Group": "UI Customization", "Value": "Test value" } }, "general": { "greet_on_app_did_become_active": { "Title": "Greet on app launch", "Description": "shows an alert on applicationDidBecomeActive", "Group": "General", "Value": false }, ... } } Testing considerations We've seen that the described architecture allows customization via configurations. We've shown in the above diagram that JustTweak can come handy when used in conjunction with our AutomationTools framework too, which is open-source. An Ephemeral configuration would define the app environment at run-time greatly simplifying the implementation of UI tests, which is well-known to be a tedious activity. Usage The two main features of JustTweak can be accessed from the TweakManager. Checking if a feature is enabled // check for a feature to be enabled let isFeatureXEnabled = tweakManager.isFeatureEnabled("feature_X") if isFeatureXEnabled { // show feature X } else { // hide feature X } Getting and setting the value of a flag for a given feature/variable. JustTweak will return the value from the configuration with the highest priority that provides it, or nil if none of the configurations have that feature/variable. // check for a tweak value let tweak = tweakManager.tweakWith(feature: <#feature_key#>, variable: <#variable_key#>") if let tweak = tweak { // tweak was found in some configuration, use tweak.value } else { // tweak was not found in any configuration } The Configuration and MutableConfiguration protocols define the following methods: func tweakWith(feature: String, variable: String) -> Tweak? func set(_ value: TweakValue, feature: String, variable: String) func deleteValue(feature: String, variable: String) You might wonder why is there a distinction between feature and variable. The reason is that we want to support the Optimizely lingo for features and related variables and therefore the design of JustTweak has to necessarily reflect that. Other experimentation platforms (such as Firebase) have a single parameter key, but we had to harmonise for the most flexible platform we support. Property Wrappers With SE-0258, Swift 5.1 introduces Property Wrappers. If you haven't read about them, we suggest you watch the WWDC 2019 "Modern Swift API Design talk where Property Wrappers are explained starting at 23:11. In short, a property wrapper is a generic data structure that encapsulates read/write access to a property while adding some extra behavior to augment its semantics. Common examples are @AtomicWrite and @UserDefault but more creative usages are up for grabs and we couldn't help but think of how handy it would be to have property wrappers for feature flags, and so we implemented them. @TweakProperty and @OptionalTweakProperty are available to mark properties representing feature flags. Here are a couple of examples, making the code so much nicer than before. 
@TweakProperty(fallbackValue: <#default_value#>, feature: <#feature_key#>, variable: <#variable_key#>, tweakManager: tweakManager) var isFeatureXEnabled: Bool @TweakProperty(fallbackValue: <#default_value#>, feature: <#feature_key#>, variable: <#variable_key#>, tweakManager: tweakManager) var publicApiHost: String @OptionalTweakProperty(fallbackValue: <#default_value_or_nil#>, feature: <#feature_key#>, variable: <#variable_key#>, tweakManager: tweakManager) var publicApiPort: Int? Mind that by using these property wrappers, a static instance of TweakManager must be available. Update a configuration at runtime JustTweak comes with a ViewController that allows the user to edit the tweaks while running the app. That is achieved by using the MutableConfiguration with the highest priority from the configurations array. This is de facto a debug menu, useful for development and internal builds, but not one to include in release builds. #if DEBUG func presentTweakViewController() { let tweakViewController = TweakViewController(style: .grouped, tweakManager: tweakManager) // either present it modally or push it on a UINavigationController } #endif Additionally, when a value is modified in any MutableConfiguration, a notification is fired to give the clients the opportunity to react and reflect changes in the UI. override func viewDidLoad() { super.viewDidLoad() NotificationCenter.default.addObserver(self, selector: #selector(updateUI), name: TweakConfigurationDidChangeNotification, object: nil) } @objc func updateUI() { // update the UI accordingly } A note on modular architecture It's reasonable to assume that any non-trivial application approaching 2020 is composed of a number of modules, and our Just Eat iOS app surely is too. With more than 30 modules developed in-house, it's crucial to find a way to inject flags into the modules but also to avoid every module depending on an external library such as JustTweak. One way to achieve this would be: define one or more protocols in the module with the set of properties desired; structure the modules to allow dependency injection of objects conforming to the above protocols; implement logic in the module to consume the injected objects. For instance, you could have a class wrapping the manager like so: protocol ModuleASettings { var isFeatureXEnabled: Bool { get } } protocol ModuleBSettings { var publicApiHost: String { get } var publicApiPort: Int? { get } } import JustTweak public class AppConfiguration: ModuleASettings, ModuleBSettings { static let tweakManager: TweakManager = { ... } @TweakProperty(...) var isFeatureXEnabled: Bool @TweakProperty(...) var publicApiHost: String @OptionalTweakProperty(...) var publicApiPort: Int? } Future evolution With recent versions of Swift and especially with 5.1, developers have a large set of powerful new tools, such as generics, associated types, opaque types, type erasure, etc. With Combine and SwiftUI entering the scene, developers are also starting to adopt new paradigms to write code. Sensible paths to evolve JustTweak could be to have the Tweak object be generic on TweakValue, to have TweakManager be an ObservableObject (which would enable publishing of events via Combine), and to use @EnvironmentObject to ease the dependency injection in the SwiftUI view hierarchy. While such changes will need time to be introduced, since our contribution to JustTweak is in line with the evolution of the Just Eat app (and therefore a gradual adoption of SwiftUI), we can't wait to see them implemented.
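As a purely illustrative sketch of that direction (not part of the current JustTweak API), the manager could be wrapped in an ObservableObject that republishes configuration changes so SwiftUI views can observe it; the ObservableTweakManager type below is hypothetical and simply re-reads a flag whenever the change notification mentioned earlier fires.

import Combine
import Foundation

// Hypothetical wrapper sketching the ObservableObject direction described above.
final class ObservableTweakManager: ObservableObject {
    @Published private(set) var isFeatureXEnabled: Bool

    private var cancellables = Set<AnyCancellable>()

    init(tweakManager: TweakManager) {
        isFeatureXEnabled = tweakManager.isFeatureEnabled("feature_X")
        // Re-read the flag whenever a MutableConfiguration changes.
        NotificationCenter.default.publisher(for: TweakConfigurationDidChangeNotification)
            .sink { [weak self] _ in
                self?.isFeatureXEnabled = tweakManager.isFeatureEnabled("feature_X")
            }
            .store(in: &cancellables)
    }
}

Such an object could then be injected with environmentObject(_:) and consumed via @EnvironmentObject down the SwiftUI view hierarchy, as hinted at above.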
If you desire to contribute, we are more than happy to receive pull requests. Conclusion In this article, we illustrated how JustTweak can be of great help in adding flexible support for feature flagging. Integrations with external providers/experimentation platforms such as Optimizely allow remote overrides of flags without the need to build a new version of the app, while the UI provided by the framework allows local overrides in development builds. We've shown how to integrate JustTweak in a project, how to set up a reasonable stack with a number of configurations, and we've given you some guidance on how to leverage it when writing UI tests. We believe JustTweak to be a great tool, with no similar open-source or proprietary alternatives, and we hope developers will adopt it more and more.
Deep Linking at Scale on iOS
- deep links
- deep linking
- universal links
- iOS
- navigation
- flow controllers
- state machine
- futures
- promises
- Just Eat
How the iOS team at Just Eat built a scalable architecture to support navigation and deep linking.
Originally published on the Just Eat Engineering Blog.
In this article, we propose an architecture to implement a scalable solution to Deep Linking on iOS using an underlying Flow Controller-based architecture, all powered by a state machine and the Futures & Promises paradigm to keep the code more readable.
How the iOS team at Just Eat built a scalable architecture to support navigation and deep linking. Originally published on the Just Eat Engineering Blog. In this article, we propose an architecture to implement a scalable solution to Deep Linking on iOS using an underlying Flow Controller-based architecture, all powered by a state machine and the Futures & Promises paradigm to keep the code more readable. At Just Eat, we use a dedicated component named NavigationEngine that is domain-specific to the Just Eat apps and their use cases. A demo project named NavigationEngineDemo that includes the NavigationEngine architecture (stripped of many details not necessary to showcase the solution) is available on GitHub. Overview Deep linking is one of the most underestimated problems to solve on mobile. A naïve explanation would say that given some sort of input, mobile apps can load a specific screen, but it only has practical meaning when combined with Universal Links on iOS and App Links on Android. In such cases, the input is a URL that would load a web page on the companion website. Let's use an example from Just Eat: opening the URL https://www.just-eat.co.uk/area/ec4m-london in a web browser would load the list of restaurants in the UK London area for the postcode EC4M. Deep linking to the mobile apps using the same URL should give a similar experience to the user. In reality, the problem is more complex than it seems at first glance; non-tech people - and sometimes even developers - find it hard to grasp. Loading a web page in a browser is fundamentally different from implementing dedicated logic on mobile to show a UIViewController (iOS) or Activity (Android) to the user and populate it with information that will most likely be gathered from an API call. The logic to perform deep linking starts with parsing the URL, understanding the intent, constructing the user journey, performing the navigation to the target screen while passing the info all the way down, and ultimately loading any required data asynchronously from a remote API. On top of all this, it also has to consider the state of the app: the user might have previously left the app in a particular state, and dedicated logic would be needed to deep link from the existing screen to the target one. A scenario to consider is when the user is not logged in and therefore some sections of the app may not be available. Deep linking can actually be triggered from a variety of sources: the Safari web browser; any app that allows tapping on a link (iMessage, Notes, etc.); any app that explicitly tries to open the app using custom URL schemes; the app itself (to perform jumps between sections); a Today extension; shortcut items (Home Screen Quick Actions); Spotlight items. It should be evident that implementing a comprehensive and scalable solution that fully addresses deep linking is far from trivial. It shouldn't be an afterthought but rather be baked into the app architecture from the initial app design. It should also be quite clear what the main problem to solve first is: the app Navigation. Navigation itself is not a problem with a single solution (if it were, the solution would be provided by Apple/Google and developers would simply stick to it). A number of solutions have been proposed over the years trying to make it simpler and generic to some degree - Router, Compass, XCoordinator, to name just a few open-source components.
I proposed the concept of Flow Controllers in my article Flow Controllers on iOS for a better navigation control back in 2014, when the community had already (I believe) started shifting towards similar approaches. Articles such as Improve your iOS Architecture with FlowControllers (by Krzysztof Zabłocki), A Better MVC, Part 2: Fixing Encapsulation (by Dave DeLong), Flow Coordinators in iOS (by Dennis Walsh), and, as recently as 2019, Navigation with Flow Controllers (by Majid Jabrayilov) were published. To me, all the proposals share one main common denominator: flow controllers/coordinators and their APIs are necessarily domain-specific. Consider the following methods taken from one of the articles mentioned above, referring to specific use cases: func showLoginViewController() { ... } func showSignupViewController() { ... } func showPasswordViewController() { ... } With the support of colleagues and friends, I tried proposing a generic and abstract solution but ultimately hit a wall. Attempts were made using enums to list the supported transitions (as XCoordinator shows in its README, for instance) or relying on meta-programming dark magic in Objective-C (which is definitely the sign of a terrible design), neither of which satisfied me in terms of reusability and abstraction. I ultimately realized that it's perfectly normal for such a problem to be domain-specific and that we don't necessarily have to find abstract solutions to all problems. Terminology For clarity, here is some of the terminology used in this article. Deep Linking: the ability to reach specific screens (via a flow) in the app either via a Deep Link or a Universal Link. Deep Link: a URI with a custom scheme (e.g. just-eat://just-eat.co.uk/login, just-eat-dk://just-eat.co.uk/settings) containing the information to perform deep linking in the app. When it comes to deep links, the host is irrelevant, but it's good to keep it as part of the URL since it makes it easier to construct the URL using URLComponents and it keeps things more 'standard'. Universal Link: a URI with an http/https scheme (e.g. https://just-eat.co.uk/login) containing the information to perform deep linking in the app. Intent: the abstract intent of reaching a specific area of the app, e.g. goToOrderDetails(OrderId). State machine transition: transitions in the state machine allow navigating to a specific area of the app (state) from another one. If the app is in a state where deep linking to a specific screen should not be allowed, the underlying state machine should not have the corresponding transition. Solution NavigationEngine is the iOS module (pod) used by the teams at Just Eat that holds the isolated logic for navigation and deep linking. As mentioned above, the magic sauce includes the usage of: FlowControllers to handle the transitions between ViewControllers in a clear and pre-defined way; Stateful state machines to allow transitions according to the current application state (more information on FSMs here and on the library at The easiest State Machine in Swift); Promis to keep the code readable using Futures & Promises and to help avoid the pyramid of doom (sticking to such a paradigm is also a key aspect of the whole design since every API in the stack is async; more info on the library at The easiest Promises in Swift); a pretty heavy amount of brainpower. NavigationEngine maintains separation of concerns between URL Parsing, Navigation, and Deep Linking.
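To make the terminology more tangible, an intent can be modelled as a plain enum. The following is a reduced, illustrative subset (the real NavigationIntent lives in the NavigationEngineDemo project and covers many more journeys; Postcode, Cuisine, and OrderId are simple types from the demo project):

// Illustrative subset of the intents described above.
enum NavigationIntent {
    case goToHome
    case goToLogin
    case goToSearch(postcode: Postcode?, cuisine: Cuisine?)
    case goToOrderHistory
    case goToOrderDetails(OrderId)
}

Each case carries just enough information for the deep linking layer to build the corresponding journey, independently of the URL or source that produced it.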
Readers can inspect the code in the NavigationEngineDemo project that also includes unit tests with virtually 100% code coverage. Following is an overview of the class diagram of the entire architecture stack. Architecture class diagram While the navigation is powered by a FlowController-based architecture, the deep linking logic is powered by NavigationIntentHandler and NavigationTransitioner (on top of the navigation stack). Note the single entry point named DeepLinkingFacade exposes the following API to cover the various input/sources we mentioned earlier: public func handleURL(_ url: URL) -> Future<Bool> public func openDeepLink(_ deepLink: DeepLink) -> Future<Bool> public func openShortcutItem(_ item: UIApplicationShortcutItem) -> Future<Bool> public func openSpotlightItem(_ userActivity: NSUserActivityProtocol) -> Future<Bool> Here are the sequence diagrams for each one. Refer to the demo project to inspect the code. Navigation As mentioned earlier, the important concept to grasp is that there is simply no single solution to Navigation. I've noticed that such a topic quickly raises discussions and each engineer has different, sometimes strong opinions. It's more important to agree on a working solution that satisfies the given requirements rather than forcing personal preferences. Our NavigationEngine relies on the following navigation rules (based on Flow Controllers): FlowControllers wire up the domain-specific logic for the navigation ViewControllers don't allocate FlowControllers Only FlowControllers, AppDelegate and similar top-level objects can allocate ViewControllers FlowControllers are owned (retained) by the creators FlowControllers can have children FlowControllers and create a parent-child chain and can, therefore, be in a 1-to-many relationship FlowControllers in parent-child relationships communicate via delegation ViewControllers have weak references to FlowControllers ViewControllers are in a 1-to-1 relationship with FlowControllers All the FlowController domain-specific API must be future-based with Future<Bool> as return type Deep linking navigation should occur with no more than one animation (i.e. for long journeys, only the last step should be animated) Deep linking navigation that pops a stack should occur without animation In the demo project, there are a number of *FlowControllerProtocols, each corresponding to a different section/domain of the hosting app. Examples such as RestaurantsFlowControllerProtocol and OrdersFlowControllerProtocol are taken from the Just Eat app and each one has domain specific APIs, e.g: func goToSearchAnimated(postcode: Postcode?, cuisine: Cuisine?, animated: Bool) -> Future<Bool> func goToOrder(orderId: OrderId, animated: Bool) -> Future<Bool> func goToRestaurant(restaurantId: RestaurantId) -> Future<Bool> func goToCheckout(animated: Bool) -> Future<Bool> Note that each one: accepts the animated parameter returns Future<Bool> so that flow sequence can be combined Flow controllers should be combined sensibly to represent the app UI structure. In the case of Just Eat we have a RootFlowController as the root-level flow controller orchestrating the children. A FlowControllerProvider, used by the NavigationTransitioner, is instead the single entry point to access the entire tree of flow controllers. 
NavigationTransitioner provides an API such as: func goToLogin(animated: Bool) -> Future<Bool> func goFromHomeToSearch(postcode: Postcode?, cuisine: Cuisine?, animated: Bool) -> Future<Bool> This is responsible to keep the underlying state machine and what the app actually shows in sync. Note the goFromHomeToSearch method being verbose on purpose; it takes care of the specific transition from a given state (home). One level up in the stack, NavigationIntentHandler is responsible for combining the actions available from the NavigationTransitioner starting from a given NavigationIntent and creating a complete deep linking journey. It also takes into account the current state of the app. For example, showing the history of the orders should be allowed only if the user is logged in, but it would also be advisable to prompt the user to log in in case he/she is not, and then resume the original action. Allowing so provides a superior user experience rather than simply aborting the flow (it's what websites achieve by using the referring URL). Here is the implementation of the .goToOrderHistory intent in the NavigationIntentHandler: case .goToOrderHistory: switch userStatusProvider.userStatus { case .loggedIn: return navigationTransitioner.goToRoot(animated: false).thenWithResult { _ -> Future<Bool> in self.navigationTransitioner.goToOrderHistory(animated: true) } case .loggedOut: return navigationTransitioner.requestUserToLogin().then { future in switch future.state { case .result: return self.handleIntent(intent) // go recursive default: return Future<Bool>.futureWithResolution(of: future) } } } Since in the design we make the entire API future-based, we can potentially interrupt the deep linking flow to prompt the user for details or simply gather missing information from a remote API. This is crucial and allows us to construct complex flows. By design, all journeys start by resetting the state of the app by calling goToRoot. This vastly reduces the number of possible transitions to take care of as we will describe in more detail in the next section dedicated to the underlying state machine. State Machine As you might have realized by now, the proposed architecture makes use of an underlying Finite State Machine to keep track of the state of the app during a deep linking journey. Here is a simplified version of the state machine configurations used in the Just Eat iOS apps. In the picture, the red arrows are transitions that are available for logged in users only, the blue ones are for logged out users only, while the black ones can always be performed. Note that every state should allow going back to the .allPoppedToRoot state so that, regardless of what the current state of the app is, we can always reset the state and perform a deep linking action starting afresh. This drastically simplifies the graph, avoiding unnecessary transitions such as the one shown in the next picture. Notice that intents (NavigationIntent) are different from transitions (NavigationEngine.StateMachine.EventType). An intent contains the information to perform a deep linking journey, while the event type is the transition from one FSM state to another (or the same). NavigationTransitioner is the class that performs the transitions and applies the companion navigation changes. A navigation step is performed only if the corresponding transition is allowed and completed successfully. If a transition is not allowed, the flow is interrupted, reporting an error in the future. 
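As a self-contained sketch of that gating (deliberately using a plain dictionary of allowed transitions and Result instead of the Stateful and Promis types that NavigationEngine actually relies on), the idea boils down to something like this:

import Foundation

// Illustrative only: a navigation step runs only if the transition is allowed.
struct TransitionGate {
    enum AppState: Hashable { case allPoppedToRoot, home, orderHistory, login }

    private(set) var current: AppState = .allPoppedToRoot
    private let allowed: [AppState: Set<AppState>] = [
        .allPoppedToRoot: [.home, .login, .orderHistory],
        .home: [.allPoppedToRoot],
        .orderHistory: [.allPoppedToRoot],
        .login: [.allPoppedToRoot]
    ]

    mutating func perform(_ target: AppState, navigation: () -> Void) -> Result<Void, Error> {
        guard allowed[current, default: []].contains(target) else {
            // Transition not allowed: interrupt the flow and report an error.
            return .failure(NSError(domain: "NavigationEngineSketch", code: 1))
        }
        navigation()      // apply the companion navigation change
        current = target  // only then update the state machine
        return .success(())
    }
}

The real NavigationTransitioner does the same asynchronously, resolving a Future<Bool> once both the transition and the companion UI change have completed.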
You can showcase a failure in the demo app by trying to follow the Login Universal Link (https://just-eat.co.uk/login) after having faked the login when following the Order History Universal Link (https://just-eat.co.uk/orders). Usage NavigationEngineDemo includes the whole stack that readers can use in client projects. Here are the steps for a generic integration of the code. Add the NavigationEngine stack (NavigationEngineDemo/NavigationEngine folder) to the client project. This can be done by either creating a dedicated pod as we do at Just Eat or by directly including the code. Include Promis and Stateful as dependencies in your Podfile (assuming the usage of Cocoapods). Modify according to your needs, implement classes for all the *FlowControllerProtocols, and connect them to the ViewControllers of the client. This step can be quite tedious depending on the status of your app and we suggest trying to mimic what has been done in the demo app. Add CFBundleTypeRole and CFBundleURLSchemes to the main target Info.plist file to support Deep Links. E.g. <key>CFBundleURLTypes</key> <array> <dict> <key>CFBundleTypeRole</key> <string>Editor</string> <key>CFBundleURLSchemes</key> <array> <string>je-internal</string> <string>justeat</string> <string>just-eat</string> <string>just-eat-uk</string> </array> </dict> </array> Add the applinks (in the Capabilities -> Associated Domains section of the main target) you'd like to support. This will allow iOS to register the app for Universal Links on the given domains looking for the apple-app-site-association file at the root of those domains once the app is installed. E.g. Implement concrete classes for DeepLinkingSettingsProtocol and UserStatusProviding according to your needs. Again, see the examples in the demo project. The internalDeepLinkSchemes property in DeepLinkSettingsProtocol should contain the same values previously added to CFBundleURLSchemes, while the universalLinkHosts should contain the same applinks: values defined in Capabilities -> Associated Domains. Setup the NavigationEngine stack in the AppDelegate's applicationDidFinishLaunching. To some degree, it should be something similar to the following: var window: UIWindow? var rootFlowController: RootFlowController! var deepLinkingFacade: DeepLinkingFacade! var userStatusProvider = UserStatusProvider() let deepLinkingSettings = DeepLinkingSettings() func applicationDidFinishLaunching(_ application: UIApplication) { // Init UI Stack let window = UIWindow(frame: UIScreen.main.bounds) let tabBarController = TabBarController.instantiate() // Root Flow Controller rootFlowController = RootFlowController(with: tabBarController) tabBarController.flowController = rootFlowController // Deep Linking core let flowControllerProvider = FlowControllerProvider(rootFlowController: rootFlowController) deepLinkingFacade = DeepLinkingFacade(flowControllerProvider: flowControllerProvider, navigationTransitionerDataSource: self, settings: deepLinkingSettings, userStatusProvider: userStatusProvider) // Complete UI Stack window.rootViewController = tabBarController window.makeKeyAndVisible() self.window = window } Modify NavigationTransitionerDataSource according to your needs and implement its methods. You might want to have a separate component and not using the AppDelegate. extension AppDelegate: NavigationTransitionerDataSource { func navigationTransitionerDidRequestUserToLogin() -> Future<Bool> { <#async logic#> } ... 
} Implement the entry points for handling incoming URLs/inputs in the AppDelegate: func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool { // from internal deep links & TodayExtension deepLinkingFacade.openDeeplink(url).finally { future in <#...#> } return true } func application(_ application: UIApplication, continue userActivity: NSUserActivity, restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool { switch userActivity.activityType { // from Safari case NSUserActivityTypeBrowsingWeb: if let webpageURL = userActivity.webpageURL { self.deepLinkingFacade.handleURL(webpageURL).finally { future in <#...#> } return true } return false // from Spotlight case CSSearchableItemActionType: self.deepLinkingFacade.openSpotlightItem(userActivity).finally { future in let originalInput = userActivity.userInfo![CSSearchableItemActivityIdentifier] as! String <#...#> } return true default: return false } } func application(_ application: UIApplication, performActionFor shortcutItem: UIApplicationShortcutItem, completionHandler: @escaping (Bool) -> Void) { // from shortcut items (Home Screen Quick Actions) deepLinkingFacade.openShortcutItem(shortcutItem).finally { future in let originalInput = shortcutItem.type <#...#> completionHandler(future.hasResult()) } } N.B. Since a number of tasks are usually performed at startup (both from cold and warm starts), it's suggested to schedule them using operation queues. The deep linking task should be one of the last tasks in the queue to make sure that dependencies are previously set up. Here is the great Advanced NSOperations talk by Dave DeLong from WWDC15. The UniversalLinkConverter class should be modified to match the paths in the apple-app-site-association, which should be reachable at the root of the website (the associated domain). It should be noted that if the app is opened instead of the browser, it would be because the Universal Link can be handled; and redirecting the user back to the web would be a fundamental mistake that should be solved by correctly defining the supported paths in the apple-app-site-association file. To perform internal app navigation via deep linking, the DeeplinkFactory class should be used to create DeepLink objects that can be fed into either handleURL(_ url: URL) or openDeepLink(_ deepLink: DeepLink). In-app testing The module exposes a DeepLinkingTesterViewController that can be used to easily test deep linking within an app. Simply define a JSON file containing the Universal Links and Deep Links to test: { "universal_links": [ "https://just-eat.co.uk/", "https://just-eat.co.uk/home", "https://just-eat.co.uk/login", ... ], "deep_links": [ "JUSTEAT://irrelev.ant/home", "justeat://irrelev.ant/login", "just-eat://irrelev.ant/resetPassword?resetToken=xyz", ... ] } Then feed it to the view controller as shown below. Alternatively, use a storyboard reference as shown in the demo app. let deepLinkingTesterViewController = DeepLinkingTesterViewController.instantiate() deepLinkingTesterViewController.delegate = self let path = Bundle.main.path(forResource: "deeplinking_test_list", ofType: "json")! 
deepLinkingTesterViewController.loadTestLinks(atPath: path) and implement the DeepLinkingTesterViewControllerDelegate: extension AppDelegate: DeepLinkingTesterViewControllerDelegate { func deepLinkingTesterViewController(_ deepLinkingTesterViewController: DeepLinkingTesterViewController, didSelect url: URL) { self.deepLinkingFacade.handleURL(url).finally { future in self.handleFuture(future, originalInput: url.absoluteString) } } } Conclusion The solution proposed in this article has proven to be highly scalable and customizable. We shipped it in the Just Eat iOS apps in March 2019 and our teams are gradually increasing the number of Universal Links supported, as you can see from our apple-app-site-association. Before implementing and adopting NavigationEngine, supporting new kinds of links was a real hassle. Thanks to this architecture, it is now easy for each team in the company to support new deep link journeys. The declarative approach in defining the API, states, transitions, and intents forces a single way to extend the code, which enables a coherent approach throughout the codebase.


Swift 6 suitable notification observers in iOS
- iOS
- Swift
The author discusses challenges managing side projects, specifically updating SignalPath to Swift 6. They encountered errors related to multiple notification observations but resolved them by shifting to publishers, avoiding sendable closure issues. Although the new approach risks background thread notifications, the compiler is satisfied with the adjustments made to the code.
I have a couple of side projects going on, although it is always a challenge to find time for them. One of them, SignalPath, is something I created back in 2015. Currently, I have been spending some time bumping the Swift version to 6, which brought quite a list of errors. In many places I had code that dealt with observing multiple notifications, but of course Swift 6 was not happy about it. let handler: (Notification) -> Void = { [weak self] notification in self?.keyboardInfo = Info(notification: notification) } let names: [Notification.Name] = [ UIResponder.keyboardWillShowNotification, UIResponder.keyboardWillHideNotification, UIResponder.keyboardWillChangeFrameNotification ] observers = names.map({ name -> NSObjectProtocol in return NotificationCenter.default.addObserver(forName: name, object: nil, queue: .main, using: handler) // Converting non-sendable function value to '@Sendable (Notification) -> Void' may introduce data races }) After moving all of the notification observing to publishers instead, I can ignore the whole sendable closure problem altogether. Publishers.Merge3( NotificationCenter.default.publisher(for: UIResponder.keyboardWillShowNotification), NotificationCenter.default.publisher(for: UIResponder.keyboardWillHideNotification), NotificationCenter.default.publisher(for: UIResponder.keyboardWillChangeFrameNotification) ) .map(Info.init) .assignWeakly(to: \.keyboardInfo, on: self) .store(in: &notificationCancellables) Great, the compiler is happy again, although this code could cause trouble if any of the notifications were posted from a background thread. But since that is not the case here, I went for skipping .receive(on: DispatchQueue.main). assignWeakly is a custom operator and the implementation looks like this: public extension Publisher where Self.Failure == Never { func assignWeakly<Root>(to keyPath: ReferenceWritableKeyPath<Root, Self.Output>, on object: Root) -> AnyCancellable where Root: AnyObject { return sink { [weak object] value in object?[keyPath: keyPath] = value } } } If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.
AnyClass protocol and Objective-C methods
- iOS
- Swift
- AnyClass
AnyClass is a protocol all classes conform to, and it comes with a feature I was not aware of. But first, how I ended up using AnyClass. While working on code using CoreData, I needed a way to enumerate all the CoreData entities and call a static function on them. If that function […]
AnyClass is a protocol all classes conform to, and it comes with a feature I was not aware of. But first, how I ended up using AnyClass. While working on code using CoreData, I needed a way to enumerate all the CoreData entities and call a static function on them. If that function is defined, it runs an entity-specific update. Let's call the function static func resetState(). It is easy to get the list of entity names of the model and then turn them into AnyClass instances using the NSClassFromString() function. let entityClasses = managedObjectModel.entities .compactMap(\.name) .compactMap { NSClassFromString($0) } At this point I had an array of AnyClass instances where some of them implemented the resetState function and some didn't. While browsing the AnyClass documentation, I saw this: You can use the AnyClass protocol as the concrete type for an instance of any class. When you do, all known @objc class methods and properties are available as implicitly unwrapped optional methods and properties, respectively. I had never heard about it, probably because I have never really needed to interact with AnyClass in such a way. Therefore, if I create an @objc static function, then I can call it by unwrapping it with ?. Without unwrapping it safely, it would crash because the Department type does not implement the function. class Department: NSManagedObject { } class Employee: NSManagedObject { @objc static func resetState() { print("Resetting Employee") } } // This triggers Employee.resetState and prints the message to the console for entityClass in entityClasses { entityClass.resetState?() } It has been a while since I wrote any Objective-C code, but its features leaking into Swift helped me out here. Reminds me of days filled with respondsToSelector and performSelector. If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.
AnyView is everywhere in Xcode 16
- iOS
- Xcode
- Swift
Xcode 16 introduces a new execution engine for Previews, enhancing project configuration support and improving performance by up to 30%. However, it wraps SwiftUI views in AnyView for debug builds, which can hinder optimization. Users can override this behavior with a custom build setting to maintain performance in debugging.
Loved to see this entry in Xcode 16's release notes: Xcode 16 brings a new execution engine for Previews that supports a larger range of projects and configurations. Now with shared build products between Build and Run and Previews, switching between the two is instant. Performance between edits in the source code is also improved for many projects, with increases up to 30%. It has been difficult at times to use SwiftUI previews when they sometimes just stop working with error messages that leave you scratching your head. Turns out, this comes with a hidden cost: Xcode 16 wraps views in AnyView in debug builds, which takes away performance. If you don't know that it only affects debug builds, you could end up on a journey of trying to improve the performance for debug builds and making things worse for release builds. Not sure if this was ever mentioned in any of the WWDC videos, but it feels like this kind of change should have been highlighted. As of Xcode 16, every SwiftUI view is wrapped in an AnyView _in debug builds only_. This speeds switching between previews, simulator, and device, but subverts some List optimizations. Add this custom build setting to the project to override the new behavior: `SWIFT_ENABLE_OPAQUE_TYPE_ERASURE=NO` Wrapping in Equatable is likely to make performance worse as it introduces an extra view in the hierarchy for every row. Curt Clifton on Mastodon Fortunately, one can turn this off if it becomes an issue in debug builds. If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.
Sorting arrays in Swift: multi-criteria
- Foundation
- iOS
- Swift
- localizedCaseInsensitiveCompare
- sort
- sorted(by:)
Swift's Foundation library provides a sorted(by:) function for sorting arrays. The areInIncreasingOrder closure needs to return true if the closure's arguments are increasing, false otherwise. How to use the closure for sorting by multiple criteria? Let's take a look at an example of sorting an array of Player structs. As said before, the closure should […]
Swift's Foundation library provides a sorted(by:) function for sorting arrays. The areInIncreasingOrder closure needs to return true if the closure's arguments are increasing, false otherwise. How to use the closure for sorting by multiple criteria? Let's take a look at an example of sorting an array of Player structs: sort by score in descending order, then by name in ascending order, then by id in ascending order. struct Player { let id: Int let name: String let score: Int } extension Player: CustomDebugStringConvertible { var debugDescription: String { "id=\(id) name=\(name) score=\(score)" } } let players: [Player] = [ Player(id: 0, name: "April", score: 7), Player(id: 1, name: "Nora", score: 8), Player(id: 2, name: "Joe", score: 5), Player(id: 3, name: "Lisa", score: 4), Player(id: 4, name: "Michelle", score: 6), Player(id: 5, name: "Joe", score: 5), Player(id: 6, name: "John", score: 7) ] As said before, the closure should return true if the left element should be ordered before the right element. If they happen to be equal, we should use the next sorting criterion. For comparing strings, we'll go for case-insensitive sorting using Foundation's built-in localizedCaseInsensitiveCompare. let sorted = players.sorted { lhs, rhs in if lhs.score == rhs.score { let nameOrdering = lhs.name.localizedCaseInsensitiveCompare(rhs.name) if nameOrdering == .orderedSame { return lhs.id < rhs.id } else { return nameOrdering == .orderedAscending } } else { return lhs.score > rhs.score } } print(sorted.map(\.debugDescription).joined(separator: "\n")) // id=1 name=Nora score=8 // id=0 name=April score=7 // id=6 name=John score=7 // id=4 name=Michelle score=6 // id=2 name=Joe score=5 // id=5 name=Joe score=5 // id=3 name=Lisa score=4 If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to the RSS feed. Thank you for reading.
How to keep Dateâs microseconds precision in Swift
- Foundation
- iOS
- Swift
- ISO8601DateFormatter
DateFormatter is used for converting a string representation of date and time to a Date type and vice versa. Something to be aware of is that the conversion loses microseconds precision. This is extremely important if we use these Date values for sorting, since we could end up with an incorrect order. Let's consider an iOS app which uses […]
DateFormatter is used for converting a string representation of date and time to a Date type and vice versa. Something to be aware of is that the conversion loses microseconds precision. This is extremely important if we use these Date values for sorting, since we could end up with an incorrect order. Let's consider an iOS app which uses an API for fetching a list of items, where each item contains a timestamp used for sorting the list. Often, these timestamps have an ISO8601 format like 2024-09-21T10:32:32.113123Z. The Foundation framework has a dedicated formatter for parsing these strings: ISO8601DateFormatter. It is simple to use: let formatter = ISO8601DateFormatter() formatter.formatOptions = [.withInternetDateTime, .withFractionalSeconds] let date = formatter.date(from: "2024-09-21T10:32:32.113123Z") print(date?.timeIntervalSince1970) // 1726914752.113 Great, but there is one caveat: it ignores microseconds. Fortunately, this can be fixed by manually parsing the microseconds and adding the missing precision to the converted Date value. Here is an example of how to do this using an extension. extension ISO8601DateFormatter { func microsecondsDate(from dateString: String) -> Date? { guard let millisecondsDate = date(from: dateString) else { return nil } guard let fractionIndex = dateString.lastIndex(of: ".") else { return millisecondsDate } guard let tzIndex = dateString.lastIndex(of: "Z") else { return millisecondsDate } guard let startIndex = dateString.index(fractionIndex, offsetBy: 4, limitedBy: tzIndex) else { return millisecondsDate } // Pad the missing zeros at the end and cut off nanoseconds let microsecondsString = dateString[startIndex..<tzIndex].padding(toLength: 3, withPad: "0", startingAt: 0) guard let microseconds = TimeInterval(microsecondsString) else { return millisecondsDate } return Date(timeIntervalSince1970: millisecondsDate.timeIntervalSince1970 + microseconds / 1_000_000.0) } } What this code does is first convert the string using the original date(from:) method, then manually extract the digits for microseconds, handling cases where there are fewer than 3 digits or even nanoseconds present. Lastly, a new Date value is created with the microseconds precision. Here are examples of the output (note that floating-point precision comes into play).
```swift
let dateStrings = [
    "2024-09-21T10:32:32.113Z",
    "2024-09-21T10:32:32.1131Z",
    "2024-09-21T10:32:32.11312Z",
    "2024-09-21T10:32:32.113123Z",
    "2024-09-21T10:32:32.1131234Z",
    "2024-09-21T10:32:32.11312345Z",
    "2024-09-21T10:32:32.113123456Z"
]
let dates = dateStrings.compactMap(formatter.microsecondsDate(from:))
for (string, date) in zip(dateStrings, dates) {
    print(string, "->", date.timeIntervalSince1970)
}
/*
 2024-09-21T10:32:32.113Z -> 1726914752.113
 2024-09-21T10:32:32.1131Z -> 1726914752.1130998
 2024-09-21T10:32:32.11312Z -> 1726914752.1131198
 2024-09-21T10:32:32.113123Z -> 1726914752.113123
 2024-09-21T10:32:32.1131234Z -> 1726914752.113123
 2024-09-21T10:32:32.11312345Z -> 1726914752.113123
 2024-09-21T10:32:32.113123456Z -> 1726914752.113123
 */
```
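To make the impact on sorting concrete, here is a minimal sketch that is not from the original post; it reuses the formatter and the extension above, and the sample timestamps are made up. With plain date(from:) parsing, both strings would collapse to the same .113 value and the order could come out wrong, while microsecondsDate(from:) keeps them distinct.

```swift
// A minimal sketch (not from the post): sorting two timestamps that differ
// only at microsecond precision. Assumes the formatter and the
// microsecondsDate(from:) extension shown above are in scope.
let timestamps = [
    "2024-09-21T10:32:32.113123Z",
    "2024-09-21T10:32:32.113003Z"
]
let sortedDates = timestamps
    .compactMap(formatter.microsecondsDate(from:))
    .sorted()
print(sortedDates.map(\.timeIntervalSince1970))
// [1726914752.113003, 1726914752.113123] (approximately, given floating-point precision)
```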
Wrapping async-await with a completion handler in Swift
- Swift
- async
- iOS
It is not often that we need to wrap an async function with a completion handler. Typically, the reverse is what happens. This need can arise in codebases where the public interface can't change just yet, but internally the code is moving towards async-await functions. Let's jump in and see how to wrap an async […]
It is not often that we need to wrap an async function with a completion handler. Typically, the reverse is what happens. This need can arise in codebases where the public interface can't change just yet, but internally the code is moving towards async-await functions. Let's jump in and see how to wrap an async function, an async throwing function, and an async throwing function that returns a value. To illustrate how to use it, we'll see an example where a PhotoEffectApplier type has a public interface consisting of completion-handler-based functions and internally uses a PhotoProcessor type that only has async functions. The end result looks like this:

```swift
struct PhotoProcessor {
    func process(_ photo: Photo) async throws -> Photo {
        // …
        return Photo(name: UUID().uuidString)
    }

    func setConfiguration(_ configuration: Configuration) async throws {
        // …
    }

    func cancel() async {
        // …
    }
}

public final class PhotoEffectApplier {
    private let processor = PhotoProcessor()

    public func apply(effect: PhotoEffect, to photo: Photo, completion: @escaping (Result<Photo, Error>) -> Void) {
        Task(operation: {
            try await self.processor.process(photo)
        }, completion: completion)
    }

    public func setConfiguration(_ configuration: Configuration, completion: @escaping (Error?) -> Void) {
        Task(operation: {
            try await self.processor.setConfiguration(configuration)
        }, completion: completion)
    }

    public func cancel(completion: @escaping (Error?) -> Void) {
        Task(operation: {
            await self.processor.cancel()
        }, completion: completion)
    }
}
```

In this example, we have all the interesting function types covered: async, async throwing, and async throwing with a return type. Great, but let's have a look at the Task initializers that make this happen. The core idea is to create a Task, run the operation, and then call the completion handler. Since most of the time we need to run the completion on the main thread, there is a queue argument with the default queue set to the main one.

```swift
extension Task {
    @discardableResult
    init<T>(
        priority: TaskPriority? = nil,
        operation: @escaping () async throws -> T,
        queue: DispatchQueue = .main,
        completion: @escaping (Result<T, Failure>) -> Void
    ) where Success == Void, Failure == any Error {
        self.init(priority: priority) {
            do {
                let value = try await operation()
                queue.async { completion(.success(value)) }
            } catch {
                queue.async { completion(.failure(error)) }
            }
        }
    }
}
```

```swift
extension Task {
    @discardableResult
    init(
        priority: TaskPriority? = nil,
        operation: @escaping () async throws -> Void,
        queue: DispatchQueue = .main,
        completion: @escaping (Error?) -> Void
    ) where Success == Void, Failure == any Error {
        self.init(priority: priority) {
            do {
                try await operation()
                queue.async { completion(nil) }
            } catch {
                queue.async { completion(error) }
            }
        }
    }
}
```

```swift
extension Task {
    @discardableResult
    init(
        priority: TaskPriority? = nil,
        operation: @escaping () async -> Void,
        queue: DispatchQueue = .main,
        completion: @escaping () -> Void
    ) where Success == Void, Failure == Never {
        self.init(priority: priority) {
            await operation()
            queue.async { completion() }
        }
    }
}
```
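To see the wrappers from a caller's perspective, here is a minimal usage sketch that is not from the original post; the PhotoEffect() and Photo(name:) initializers are assumptions based on the snippet above.

```swift
// A minimal usage sketch. The caller never sees async-await, only the
// completion handler; PhotoEffect() and Photo(name:) are assumed types
// from the snippet above.
let applier = PhotoEffectApplier()

applier.apply(effect: PhotoEffect(), to: Photo(name: "IMG_0001")) { result in
    switch result {
    case .success(let processed):
        print("Processed photo:", processed.name)
    case .failure(let error):
        print("Applying the effect failed:", error)
    }
}
```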
Dark Augmented Code theme for Xcode
- Swift
- Xcode
After a couple of years, I tend to get tired of looking at the same colour scheme in Xcode. Then I spend quite a bit of time looking for a new theme and come back empty-handed. Material default has served me for a while, but it never felt like a perfect colour […]
After a couple of years, I tend to get tired of looking at the same colour scheme in Xcode. Then I spend quite a bit of time looking for a new theme and come back empty-handed. Material default has served me for a while, but it never felt like a perfect colour scheme for me. Therefore, I decided to take the road of creating a new colour scheme of my own, which is named "Augmented Code (Dark)". It is available for Xcode and iTerm 2. Download it from here: GitHub


Cancellable withObservationTracking in Swift
- iOS
- Swift
- SwiftUI
- observation
- withObservationTracking
The Observation framework came out along with iOS 17 in 2023. Using this framework, we can make objects observable very easily. Please refer to @Observable macro in SwiftUI for a quick recap if needed. It also has a function withObservationTracking(_:onChange:) that can be used for cases where we want to manually get a callback when a tracked […]
The Observation framework came out along with iOS 17 in 2023. Using this framework, we can make objects observable very easily. Please refer to @Observable macro in SwiftUI for a quick recap if needed. It also has a function withObservationTracking(_:onChange:) that can be used for cases where we want to manually get a callback when a tracked property is about to change. This works as a one-shot function, and the onChange closure is called only once. Note that it is called before the value has actually changed. If we want to get the changed value, we need to read the value on the next run loop cycle. It would be much more useful if we could use this function in a way where we have an observation token and, as long as it is set, the observation stays active. Here is the function with cancellation support.

```swift
func withObservationTracking(
    _ apply: @escaping () -> Void,
    token: @escaping () -> String?,
    willChange: (@Sendable () -> Void)? = nil,
    didChange: @escaping @Sendable () -> Void
) {
    withObservationTracking(apply) {
        guard token() != nil else { return }
        willChange?()
        RunLoop.current.perform {
            didChange()
            withObservationTracking(
                apply,
                token: token,
                willChange: willChange,
                didChange: didChange
            )
        }
    }
}
```

The apply closure drives which values are being tracked, and it is passed into the existing withObservationTracking(_:onChange:) function. The token closure controls whether the change should be handled and whether we need to continue tracking. The willChange and didChange closures are called before and after the value has changed. Here is a simple example where we have a view which controls whether the observation should be active. Changing the value in the view model only triggers the print lines when the observation token is set.

```swift
struct ContentView: View {
    @State private var viewModel = ViewModel()
    @State private var observationToken: String?

    var body: some View {
        VStack {
            Text(viewModel.title)
            Button("Add") {
                viewModel.add()
            }
            Button("Start Observing") {
                guard observationToken == nil else { return }
                observationToken = UUID().uuidString
                observeAndPrint()
            }
            Button("Stop Observing") {
                observationToken = nil
            }
        }
        .padding()
    }

    func observeAndPrint() {
        withObservationTracking({
            _ = viewModel.title
        }, token: {
            observationToken
        }, willChange: { [weak viewModel] in
            guard let viewModel else { return }
            print("will change \(viewModel.title)")
        }, didChange: { [weak viewModel] in
            guard let viewModel else { return }
            print("did change \(viewModel.title)")
        })
    }
}

@Observable
final class ViewModel {
    var counter = 0

    func add() {
        counter += 1
    }

    var title: String {
        "Number of items: \(counter)"
    }
}
```
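For contrast, here is a minimal sketch of the one-shot behaviour described above, written by me rather than taken from the post; it only relies on the standard withObservationTracking(_:onChange:) call and the ViewModel from the example.

```swift
import Observation

// One-shot observation: onChange fires exactly once, before the new value is set.
let viewModel = ViewModel()

withObservationTracking {
    _ = viewModel.title // accessing title registers the underlying counter for tracking
} onChange: {
    print("title is about to change") // runs once, before the change is applied
}

viewModel.add() // triggers the onChange callback a single time
viewModel.add() // no callback; tracking ended after the first change
```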
Referencing itself in a struct in Swift
- Foundation
- iOS
- Swift
It took a long time, I mean years, but it finally happened. I stumbled on a struct which had a property of the same type. At first, it is kind of interesting that the replies property compiles fine, although it is a collection of the same type. I guess it is so because the array's storage […]
It took a long time, I mean years, but it finally happened. I stumbled on a struct which had a property of the same type.

```swift
struct Message {
    let id: Int

    // This is OK:
    let replies: [Message]

    // This is not OK
    // Value type 'Message' cannot have a stored property that recursively contains it
    let parent: Message?
}
```

At first, it is kind of interesting that the replies property compiles fine, although it is a collection of the same type. I guess it is so because the array's storage type is a reference type. The simplest workaround is to use a closure for capturing the actual value.

```swift
struct Message {
    let id: Int
    let replies: [Message]

    private let parentClosure: () -> Message?

    var parent: Message? {
        parentClosure()
    }
}
```

Or we could go for using a boxed wrapper type.

```swift
struct Message {
    let id: Int
    let replies: [Message]

    private let parentBoxed: Boxed<Message>?

    var parent: Message? {
        parentBoxed?.value
    }
}

class Boxed<T> {
    let value: T

    init(value: T) {
        self.value = value
    }
}
```

Or, if we prefer property wrappers, using that instead.

```swift
struct Message {
    let id: Int
    let replies: [Message]

    @Boxed var parent: Message?
}

@propertyWrapper
class Boxed<Value> {
    var value: Value

    init(wrappedValue: Value) {
        value = wrappedValue
    }

    var wrappedValue: Value {
        get { value }
        set { value = newValue }
    }
}
```

Then there are also other options, like changing the struct into a class instead, but that is a bigger change to consider. All in all, it is fascinating how something seemingly simple actually has a pretty complex background.
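Here is a minimal usage sketch of the property wrapper variant above; it is mine rather than from the post, and the sample values are made up. Because Boxed exposes init(wrappedValue:), the synthesized memberwise initializer accepts a plain Message? for parent.

```swift
// A minimal sketch using the Boxed property wrapper from the last snippet.
let root = Message(id: 1, replies: [], parent: nil)
let reply = Message(id: 2, replies: [], parent: root)

print(reply.parent?.id ?? -1) // 1
```

One thing to keep in mind with this approach is that Boxed is a class, so copies of a Message share the same box; mutating the boxed value through one copy is visible through the others.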
ScrollView phase changes on iOS 18
- Swift
- SwiftUI
- iOS
- onScrollPhaseChange
- ScrollGeometry
- ScrollPhase
- ScrollPhaseChangeContext
- ScrollView
In addition to the scroll-related view modifiers covered in the previous blog post, there is another one for detecting scroll view phases, aka the state of the scrolling. The new view modifier is called onScrollPhaseChange(_:) and has three arguments in the change closure: old phase, new phase and a context. ScrollPhase is an enum with […]
In addition to the scroll-related view modifiers covered in the previous blog post, there is another one for detecting scroll view phases, aka the state of the scrolling. The new view modifier is called onScrollPhaseChange(_:) and has three arguments in the change closure: the old phase, the new phase and a context. ScrollPhase is an enum with the following values:

- animating – animating the content offset
- decelerating – user interaction stopped and scroll velocity is decelerating
- idle – no scrolling
- interacting – user is interacting
- tracking – a potential user-initiated scroll event is going to happen

The enum has a convenience property isScrolling which is true when the phase is not idle. ScrollPhaseChangeContext captures additional information about the scroll state, and it is the third argument of the closure. The type gives access to the current ScrollGeometry and the velocity of the scroll view. Here is an example of a scroll view which has the new view modifier attached.

```swift
struct ContentView: View {
    @State private var scrollState: (
        phase: ScrollPhase,
        context: ScrollPhaseChangeContext
    )?

    let data = (0..<100).map({ "Item \($0)" })

    var body: some View {
        NavigationStack {
            ScrollView {
                ForEach(data, id: \.self) { item in
                    Text(item)
                        .frame(maxWidth: .infinity)
                        .padding()
                        .background {
                            RoundedRectangle(cornerRadius: 8)
                                .fill(Color.cyan)
                        }
                        .padding(.horizontal, 8)
                }
            }
            .onScrollPhaseChange { oldPhase, newPhase, context in
                scrollState = (newPhase, context)
            }
            Divider()
            VStack {
                Text(scrollStateDescription)
            }
            .font(.footnote.monospaced())
            .padding()
        }
    }

    private var scrollStateDescription: String {
        guard let scrollState else { return "" }
        let velocity: String = {
            guard let velocity = scrollState.context.velocity else { return "none" }
            return "\(velocity)"
        }()
        let geometry = scrollState.context.geometry
        return """
        State at the scroll phase change
        Scrolling=\(scrollState.phase.isScrolling)
        Phase=\(scrollState.phase)
        Velocity \(velocity)
        Content offset \(geometry.contentOffset)
        Visible rect \(geometry.visibleRect.integral)
        """
    }
}
```
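As one possible application of the isScrolling convenience, here is a minimal sketch of my own (not from the post) that hides an overlay bar while the user is scrolling; the layout details are assumptions.

```swift
import SwiftUI

// A minimal sketch (iOS 18+): hide an overlay bar while the scroll view is not idle.
struct ToolbarHidingView: View {
    @State private var isScrolling = false

    var body: some View {
        ScrollView {
            ForEach(0..<50, id: \.self) { index in
                Text("Row \(index)")
                    .frame(maxWidth: .infinity)
                    .padding()
            }
        }
        // Two-argument variant of the same modifier: only the new phase is needed here.
        .onScrollPhaseChange { _, newPhase in
            withAnimation {
                isScrolling = newPhase.isScrolling
            }
        }
        .overlay(alignment: .bottom) {
            if !isScrolling {
                Text("Toolbar placeholder")
                    .padding()
                    .background(.thinMaterial, in: Capsule())
                    .transition(.move(edge: .bottom).combined(with: .opacity))
            }
        }
    }
}
```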
Recent content on Benoit Pasquier
From Engineer to Manager: A Year of Growth and Transformation
It feels like it was yesterday when I became an engineering manager but it has been almost a year. I want to take this time to reflect on the challenges and learnings from this journey.
Things to know before becoming an Engineering Manager
The journey from individual contributor to engineering manager isn't always straightforward. Today, I'll share what it means to become an engineering manager from my point of view, and a few important points to be aware of before making this transition.
Transitioning to an Engineering Manager role
It's been a while since I've posted anything on my website; that's because there have been a few changes in 2022 that kept me away from writing. It's time to resume it.
Security Application Static Analysis applied to iOS and Gitlab CI
Security is a big topic in software engineering, but how does it apply to mobile development? We care about user experience and mobile performance, but security issues are rarely prioritized. This week, I'll share how to integrate security tools into your CI pipeline to stay aware of your codebase health.
Being more efficient as a mobile engineer
I was reading this week about the "10x engineer" and what it means in the tech industry. If the title can be questionable, I wanted to reflect on its definition and what it can mean in mobile engineering.
When to remove your iOS app from the App Store
For most mobile engineers, the end game is to release our own apps. For the few projects that make it to the App Store, it can be pretty hard to keep them alive over time. Eventually, the question comes up: should I remove my app from the App Store? Today, I'll share the thought process that led me to sunset one.
Weak self, a story about memory management and closure in Swift
Memory management is a big topic in Swift and iOS development. While there are plenty of tutorials explaining when to use weak self
with a closure, here is a short story about when memory leaks can still happen with it.
Setting up Auto Layout constraints programmatically in Swift
In iOS development, content alignment and spacing is something that can take a lot of our time. Today, let's explore how to set constraints with UIKit, update them and resolve constraint conflicts.
Ten years of blogging, one article at a time
Most people don't know it, but I've been blogging for some time now. Actually, tomorrow will be ten years. Today is a good time to take a walk down memory lane.
Deep linking and URL scheme in iOS
Opening an app from a URL is such a powerful iOS feature. It drives users to your app, and can create shortcuts to specific features. This week, we'll dive into deep linking on iOS and how to create a URL scheme for your app.
Tips and tweaks to integrate Github Action to your iOS project
I've been exploring more and more tooling around the iOS ecosystem. One tool I really enjoy using these days is Github Action as continuous integration for my projects. Today we'll dive into tips and tweaks to make the most of it.
Flutter and fastlane, how to setup an iOS continuous delivery solution
When it comes to iOS development, everybody has their own favorite language and framework: Swift, Objective-C, SwiftUI, React-Native, Flutter and so on. Unlike most of my previous posts, today we're going to leverage some iOS tooling for cross-platform technologies: fastlane and Flutter.
Currency TextField in SwiftUI
Between banking and crypto apps, it's quite often that we interact with currency inputs on a daily basis. If creating a localized UITextField
can already be tricky in UIKit, I was wondering how hard it would be to do a similar one in SwiftUI. Let's see today how to create a localized currency TextField
in SwiftUI.
Open Source checklist for your next Swift library
Like many developers, I use open source tools on a daily basis. Recently, I got the chance to create one for other teammates and tried to think about what I should consider before launching it. Today I share this checklist.
Unit testing UIView action and gesture in Swift
A big part of the developer journey is making sure our code behaves as expected. It's best practice to set up tests that allow us to check quickly and often that nothing is broken. If unit testing is common practice to check the business logic, we can also extend it to cover some specific UI behaviors. Let's see how to unit test views and gestures in UIKit.
Dependency injection and Generics to create a modular app in Swift
When we talk about a modular app, we rarely mention how complex it can become over time and get out of hand. In most cases, importing frameworks into one another is a reasonable solution, but we can do more. Let's explore how with dependency inversion in Swift, and how to bring order to our components.
Things I wish I knew in my early coding career
For the past few years, I had the opportunity to mentor new joiners through different roles. In some aspects, I could see myself in them the same way I started years back: eager to prove themselves, jumping on the code and hacking around.
I tried to think about what I learnt the hard way since my first role in the tech industry and how I could help them learn the easy way.
Create a web browser with WebKit and SwiftUI
Recently, I've been more and more curious about the web experience in mobile apps. Most web browser apps look alike, so I was wondering how I could recreate one with WebKit and SwiftUI. Let's dive in.
Migrating an iOS app to SwiftUI - Database with Realm
To move an existing iOS app codebase to SwiftUI can quickly become a challenge if we don't scope the difficulties ahead. After covering the navigation and design layer last week, it's time to dive deeper into the logic and handle the code migration for a database and the user preferences.
Migrating an iOS app to SwiftUI - Navigation & Storyboards
If SwiftUI is great for many things, completely migrating an existing app codebase to it can be really tricky. In a series of blog posts, I'll share how to migrate an iOS app written in Swift with UIKit to SwiftUI. Today, let's start with the navigation and the UI components with storyboards.
Creating a webcam utility app for macOS in SwiftUI
Did you ever have to share your screen and camera together? I recently did and it was that easy. How hard could it be to create our own? Today, we'll code our own webcam utility app for macOS in SwiftUI.
Migrating MVVM architecture from RxSwift to Combine
It's been almost two years since Combine was introduced to the Apple developer community. Like many developers, you want to migrate your codebase to it. You don't want to be left behind, but you're not sure where to start, and maybe not sure if you want to jump to SwiftUI either. Nothing to worry about; let's see step by step how to migrate an iOS sample app using UIKit and RxSwift to Combine.
How to display date and time in SwiftUI
Displaying dates or times is a very common requirement for many apps, often using a specific date formatter. Let's see what SwiftUI brings to the table to make it easier for developers.
Create a dynamic onboarding UI in Swift
When creating new features, it's really important to think about how our users will use them. Most of the time, the UI is straightforward enough. However, sometimes you will want to give some guidance, to highlight a button or a switch, with a message attached. Today, we'll create a reusable and adaptable overlay in Swift to help onboard mobile users for any of your features.
Goodbye 2020 - A year in perspective
Close to the end of the year, I tend to list what I've accomplished but also what didn't go so well, to help me see what I can do better next year. A couple of days early, it's time to look back at 2020.
How to pass data between views using Coordinator pattern in Swift
A question that comes back often when using the Coordinator pattern in iOS development is how to pass data between views. Today I'll share different approaches to the same problem, regardless of whether you are using MVVM, MVC or another architectural design pattern.
Automating App Store localized screenshots with XCTest and Xcode Test Plan
One reason I like working on native mobile apps so much is delivering a user experience based on the user's region and location. However, for every update, it can be painful for developers to recapture screenshots for each available language. Today, I'll share how to automate this with UI tests and Xcode tools.
Playing Video with AVPlayer in SwiftUI
I've been experimenting more and more with SwiftUI and I really wanted to see what we can do with video content. Today I'll share my findings, showing how to play video using AVFoundation
in SwiftUI, including some mistakes to avoid.
With Catalyst and SwiftUI multi-platform, should you create a macOS version of your app?
With Mac Catalyst and SwiftUI support for macOS, Apple has been pushing new tools to the community for the past couple of years to create new services on Mac computers. Does it mean you should too? Here are a couple of things to consider first.
Create a watchOS app in SwiftUI
Designing a watchOS app in Swift always felt quite tricky. I could spend hours tweaking and redoing layouts and constraints. With SwiftUI supporting watchOS, I wanted to have a new try at it, releasing a standalone app for Apple Watch.
As software engineer, how to face the impostor syndrome?
Shortly stepping back from coding for a week and reading about the community, I realized how easy it is to be crushed by anxiety: I see so many great things happening every day, things I want to be part of, while at the same time feeling anxious about being good enough. These are my thoughts on how to face the impostor syndrome.
Advanced testing tips in Xcode
In the last couple of years, Apple has made some good efforts to improve their testing tools. Today, I'll walk you through some tips to make sure your test suite runs at its best capacity.
Atomic properties and Thread-safe data structure in Swift
A recurring challenge in programming is accessing a shared resource concurrently. How do we make sure the code doesn't behave differently when multiple threads or operations try to access the same property? In short, how do we protect against a race condition?
Deploying your Swift code on AWS Lambda
About a month ago, it became possible to run Swift code on AWS Lambda. I was really interested to try it and see how easy it would be to deploy small Swift functions as a serverless application. Let's see how.
Introduction to MVVM pattern in Objective-C
Even though the iOS ecosystem is growing further away from Objective-C every day, some companies still heavily rely on it. A week away from another wave of innovation at WWDC 2020, I thought it would be interesting to dive back into Objective-C, starting with an MVVM pattern implementation.
100 day challenge of data structure and algorithm in Swift
Since January, I've been slowing down blogging for a couple of reasons: I started doubting myself and the quality of my content, but I also wanted to focus more on some fundamentals I felt I was missing. So I committed to a "100 day challenge" coding challenge, focused on data structures and algorithms in Swift.
Data Structure - Implementing a Tree in Swift
Following up previous articles about common data structures in Swift, this week it's time to cover the Tree, a very important concept that we use every day in iOS development. Let's dive in.
Using Key-Value Observing in Swift to debug your app
Recently, I was looking into a bug where the UITabBar was inconsistently disappearing on specific pages. I tried different approaches but I couldn't figure out where it got displayed and hidden. That's when I thought about KVO.
Data Structure - Coding a Stack in Swift
After covering last week how to code a Queue in Swift, it sounds natural to move on to the Stack, another really handy data structure which also finds its place in iOS development. Let's see why.
Data Structure - How to implement a Queue in Swift
Recently revisiting computer science fundamentals, I was interested to see how specific data structures apply to iOS development, starting this week with one of the most common data structures: the queue.
Should I quit blogging?
When I started this blog in 2012, it was at first to share solutions to technical problems I encountered in my daily work, to give back to the community. Over the years, I extended the content to other projects and ideas I had. Nowadays, I get more and more feedback on it, sometimes good, sometimes bad; either way, there is always something to learn from it.
Start your A/B testing journey with SwiftUI
Last year, I shared a solution to tackle A/B testing on iOS in Swift. Now that we have SwiftUI, I want to see if there is a better way to implement A/B testing. Starting from the same idea, I'll share different implementations to find the best one.
How to make your iOS app smarter with sentiment analysis
For quite some time now, I've been developing an interest in data analysis to find new ways to improve mobile apps. I've recently found some time to experiment with natural language processing for a very specific use case related to my daily work: sentiment analysis of customer reviews on fashion items.
Localization with SwiftUI, how to preview your localized content
With SwiftUI being recently introduced, I was curious if we could take advantage of SwiftUI preview to speed up testing localization and make sure your app looks great for any language.
SwiftUI - What has changed in your MVVM pattern implementation
Introduced in 2019, SwiftUI, Apple's declarative UI framework, made UI implementation much simpler. After some time experimenting with it, I'm wondering today if MVVM is still the best pattern to use with it. Let's see what has changed when implementing MVVM with SwiftUI.
Data Structure and Algorithm applied to iOS
When asked about data structures and algorithms for an iOS development role, there is always this idea that it's not knowledge you need. Swift already has native data structures, right? Isn't the rest only UI components? That's definitely not true. Let's step back and discuss data structures and algorithms applied to iOS development.
How to integrate Redux in your MVVM architecture
For the last couple of years, I've been experimenting with different architectures to understand the pros and cons of each one of them. The Redux architecture is definitely one that piqued my curiosity. In this new post, I'll share my findings pairing Redux with MVVM, another pattern I'm familiar with, and more importantly why you probably shouldn't pair them.
Software engineer, it's okay to not have a side project
There is a belief that any software developer must contribute to or have a side project to work on. Even if it's great to have one, I think there is something bigger at stake.
How to build a modular architecture in iOS
Over time, any code base grows as the project evolves and matures. It creates two main constraints for developers: how to keep the code well organized while keeping the build time as low as possible. Let's see how a modular architecture can fix that.
Analytics - How to avoid common mistakes in iOS
I have been interested in analytics tools for a while, especially when applied to mobile development. Over time, I saw many code mistakes when implementing an analytics solution. Some of them can be easily avoided when the developer has the right insights; let's see how.
Apps and Projects
Over time, I have spent quite a while building different apps and projects. Here is the list of the ones that became something. Lighthouse is a webapp written in Swift to test universal link configuration. Driiing, a running companion app to signal runners coming to pedestrians. Appy, an iOS app that helps you quit your bad habits. Square is a resizing tool for app icons written in Rust. Japan Direct, an itinerary app for iOS to visit Japan like a local.
Events and Talks
I recently tried to be more active in the iOS community. Becoming a speaker and talking at events is my next challenge. Here is the list of talks I've given so far. My very first one was recently at the iOS meetup Singapore in July 2019, talking about scalability of an iOS app along with your team. You can read more about this whole new journey here. I also got the chance to be part of iOS Conf SG 2021, an online version of the very popular international event iOS Conf SG.
Code Coverage in Xcode - How to avoid a vanity metric for your iOS app
Since Xcode 7, iOS developers can generate code coverage for their app: a report showing which areas of the app are covered by unit tests. However, this isn't always accurate; let's see why you should not base your code health only on code coverage.
Appy, an iOS app to help you quit your bad habits
It has been a while since I wanted to create something helpful to others, not just another random app. Then I found out there were not so many great sobriety apps, so I launched one. Here is Appy, to help you quit your bad habits.
How to integrate Sign In with Apple in your iOS app
With iOS 13, Apple is introducing "Sign In with Apple", an authentication system that allows users to create an account for your app based on their Apple ID. Let's see how to integrate it in your app and be ready for the iOS 13 launch.
How to avoid common mistakes for your first iOS talk
I have been a bit quieter for the past couple of weeks, taking a break from my weekly routine of blogging. It's not because I was lazy, but I wanted to take time to digest WWDC. At the same time I had other running projects; one was my first talk at an iOS meetup. Here are a couple of tips I would have loved to hear earlier.
First steps in functional reactive programming in Swift with Apple Combine framework
One debate over the past year in the iOS ecosystem was around functional reactive frameworks like RxSwift or ReactiveCocoa. This year at WWDC 2019, Apple took a position on it and released their own functional reactive programming framework: Combine.
iOS Code Review - Health check of your Swift code
I was recently asked to review an iOS application to see how healthy the code base was, whether it follows best practices and how easy it would be to add new features to it. While I review code daily for small pull requests, analyzing a whole app at once is quite a different exercise. Here are some guidelines to help with that analysis.
How to implement Coordinator pattern with RxSwift
After weeks experimenting with different patterns and code structures, I wanted to go further in functional reactive programming and see how to take advantage of it while following the Coordinator pattern. This post describes how to integrate RxSwift with the Coordinator pattern and which mistakes to avoid.
ReSwift - Introduction to Redux architecture in Swift
If you are not familiar with it, Redux is a JavaScript open source library designed to manage web application states. It helps a lot to make sure your app always behaves as expected and makes your code easier to test. ReSwift is the same concept but in Swift. Let's see how.
Tools and tips to scale your iOS project along with your team
We often talk about the scalability of iOS apps but not much about the project itself or the team. How do you prepare your project to move from 2 developers to 6? How about 10 or 20 more? In that research, I've listed different tools to prepare your team and project to scale.
RxSwift & MVVM - Advanced concepts of UITableView with RxDataSources
For the past months, I keep going further in RxSwift usage. I really like the idea of forwarding events through different layers, but the user interface sometimes stays a challenge. Today, I'll describe how to use RxDataSources to keep things as easy as possible.
How to use Vapor Server to write stronger UI tests in Swift
Even if I usually stay focused on the customer-facing side of mobile development, I like the idea of writing backend APIs with all the security that Swift includes. Starting small, why not use a Swift server for our UI tests to mock content and stay as close as possible to the real app.
How to bootstrap your iOS app to iterate faster
I love developing new iOS apps and creating new products. However, regardless of the project, it often needs a team to mix the required skills: design, coding, marketing. That said, this is less and less true, so let's see how to bootstrap your iOS app.
RxSwift & MVVM - How to use RxTests to test your ViewModel
Not that long ago, I wrote about how to pair RxSwift with the MVVM architecture in an iOS project. Even if I refactored my code to be reactive, I didn't mention the unit tests. Today I'll show, step by step, how to use RxTest to unit test your code.
Down the rabbit hole of iOS design patterns
For years now, the whole iOS community has written content about the best way to improve or replace the Apple MVC we all started with, myself included. MVC, MVVM, MVP, VIPER? Regardless of the type of snake you have chosen, it's time to reflect on that journey.
Coordinator & MVVM - Clean Navigation and Back Button in Swift
After introducing how to implement the Coordinator pattern with an MVVM structure, it feels natural for me to go further and cover some of the blank spots of Coordinator and how to fix them along the way.
Reversi - An elegant A/B testing framework for iOS in Swift.
A couple of weeks ago, I heard somebody talking about A/B testing in iOS and how "mobile native A/B testing is hard to implement". It didn't sound right to me. So I built a tiny framework for that in Swift. Here is Reversi.
Dos and Don'ts for creating an onboarding journey on iOS
I was recently researching onboarding journeys in iOS, that succession of screens displayed at the first launch of a freshly installed mobile app. But regardless of how beautiful the design can be, why are so many people tempted to skip it? I listed things to consider while creating an onboarding journey for your iOS app.
Introduction to Coordinator pattern in Swift
After some time creating different iOS apps following an MVVM pattern, I'm often not sure how to implement the navigation. If the View handles the rendering and the user's interactions and the ViewModel the service or business logic, where does the navigation sit? That's where the Coordinator pattern comes in.
How to create a customer focused mobile app
Last year, I launched Japan Direct with a friend, an itinerary app for Japan travellers. Even if the first version came up quite quickly, I kept iterating while always staying focused on customer feedback first. Almost a year later, it's a good time for a synthesis: to see what worked and how we created a customer-focused app.
Adaptive Layout and UICollectionView in Swift
Apple introduced trait variations in iOS 8, letting developers create more adaptive designs for their mobile apps, reducing code complexity and avoiding duplicated code between devices. But how do you take advantage of variations for UICollectionView?
This post will cover how to set up variations via Interface Builder but also programmatically, using Auto Layout and UITraitVariation with a UICollectionView to create a unique adaptive design.
RxSwift & MVVM - An alternative structure for your ViewModel
For the last couple of weeks, I've worked a lot on how to integrate RxSwift into an iOS project, but I wasn't fully satisfied with the view model. After reading a lot of documentation and trying things on my side, I've finally found a structure I'm happy with.
Create a machine learning model to classify Fashion images in Swift
Since WWDC18, Apple has made it way easier for developers to create machine learning models to integrate into iOS apps. I have tried different models myself in the past: one for face detection, and another created with TensorFlow for fashion classification during a hackathon. Today I'll share with you how I created a model dedicated to fashion brands.
How to integrate RxSwift in your MVVM architecture
It took me quite some time to get into Reactive Programming and its variant adapted for iOS development with RxSwift and RxCocoa. However, being a fan of the MVVM architecture and using an observer design pattern with it, it was natural for me to revisit my approach and use RxSwift instead. That's what I'm going to cover in this post.
Design pattern in Swift - Delegation
The delegation pattern is one of the most common design patterns in iOS. You probably use it on a daily basis without noticing, every time you create a UITableView or UICollectionView and implement their delegates. Let's see how it works and how to implement it in Swift.
UI testing - How to inspect your iOS app with Calabash and Appium
Part of the journey in software development is testability. In mobile development, testability for your iOS app goes through UI testing. Let's see different ways to inspect any UI element and prepare your iOS app for UI automation testing.
Don't forget what you've accomplished this year
While wishing people around me a happy new year, they helped me realise how many good things happened to me this year. Funny enough, while listing my goals for 2019, I found the matching list for 2018, and here is what really happened.
Develop your creativity with ephemeral iOS apps
From my first year studying computer science, I've always wanted to do more in my free time and create simple projects that could be useful to others. I won't lie, I wish I was able to monetize them, but regardless of the outcome, learning was always part of the journey.
Design pattern in Swift - Observers
During this year, I have blogged quite a bit about code architecture in Swift, and I've realized that I didn't explain much about which design patterns to use with it. In a series of coming posts, I will cover different design patterns, starting now with the observer.
Build a visual search app with TensorFlow in less than 24 hours
For a while now, I have really wanted to work on a machine learning project, especially since Apple now lets you import trained models into your iOS app. Last September, I took part in a 24h hackathon for an e-commerce business; that was my chance to test it. The idea was simple: a visual search app, listing similar products based on a picture.
Always keep your skills sharp
It has been a couple of months since my last post, and despite my intentions, a lot of things kept me busy and away from blogging. Looking back, it all revolves around the same idea: why it's important to always keep your skills sharp.
How to detect if your iOS app hits product market fit
A couple of months ago, I built an app and released it on the App Store. Since publishing it, I really wanted to see how it lives and understand how to make it grow. Ideally, I wanted to know if there is a product/market fit. In this article, I describe the steps and ideas that helped my app grow and what I learnt from it.
The best way to encode and decode JSON in Swift4
Most mobile apps interact at some point with remote services, fetching data from an API, submitting a form… Let's see how to use Codable in Swift to easily encode objects and decode JSON in a couple of lines of code.
Why choosing XCUITest framework over Appium for UI automation testing
I recently went to a Swift conference where UI automation testing was one of the subjects. I already mentioned it with Appium in the past, but I think it's time to go back to it and explain why today I still prefer using Apple's testing framework instead.
Why and how to add home screen shortcut for your iOS app
I recently implemented 3D Touch for an app and I was very interested in home screen quick actions. While it can be a good way to improve user experience, it doesn't mean your app always needs it. In this article, I explain how to add a home screen shortcut for your app in Swift, but mostly what can justify implementing it.
What I learn from six years of blogging
I recently realised that my first blog post was 6 years ago. It's a good occasion for me to do a little retrospective and share what I learnt from blogging over the years.
Error handling in MVVM architecture in Swift
If you care about user experience, error handling is a big part you have to cover. We can design how a mobile app looks when it works, but what happens when something goes wrong? Should we display an alert to the user? Can the error stay silent? And mostly, how do we implement it the best way with our current design pattern? Let's see our options while following the MVVM pattern.
From the idea of an iOS app to App Store in 10 hours
The best way to learn and become more creative as a developer is to focus on a side project. A really good friend coming back from Japan came to me with an idea when I needed that side project. This is how we created Japan Direct, from the idea to the App Store in almost no time.
How to optimise your UICollectionView implementation in Swift
For the last couple of weeks, I tried to step back from my development work to analyse what is time consuming in mobile development. I realised that most new views are based on the same approach, reimplementing a similar structure around a UICollectionView or UITableView.
What if I could have a more generic approach where I can focus only on what matters, the user experience? That's what I tried to explore in this article.
Support universal links in your iOS app
For the last couple of weeks, I traveled with only my iPhone, and I realised how many apps I use daily still rely on their websites. Even with the right iOS app installed, I had to browse in Safari to get specific details. That is why it's so important to support universal links in iOS. Let me show you how.
Make the most of enumerations in Swift
Enumerations have changed a lot between Objective-C and Swift. We can easily forget how useful and powerful they can be. I wanted to get back to them through simple examples to make the most of them.
How to integrate Firebase in your iOS app
Firebase is a set of tools introduced by Google to build better mobile apps. I have worked with it many times, and even if it's straightforward to integrate, here are a couple of implementation tips to make the most of it.
From lean programming to growth marketing
I recently followed a growth marketing course, introducing the mindset and methodology to make a company grow. I learnt a lot from it and, since then, I try to apply this knowledge on a daily basis. After more reflection on it, a lot of the ideas looked very similar to the software development job; this is the part I would like to share.
Introduction to Protocol-Oriented Programming in Swift
When I started coding years ago, it was all about object-oriented programming. With Swift, a new approach came up, making the code even easier to reuse and to test: Protocol-Oriented Programming.
Why you should abstract any iOS third party libraries
If you have an iOS app, you might have integrated external libraries and tools to help you get your product ready faster. However, your iOS architecture and Swift code shouldn't depend on those libraries.
Optimise Xcode build to speed Fastlane
The best part of continuous integration is the ability to automatically run tests and build apps, ready to be deployed. However, an automatic build doesn't mean a smart or optimised build. Here are some tips I collected along the way to speed up the delivery process.
Unit Testing your MVVM architecture in Swift
To be sure new code won't break the old code already implemented, it's best practice to write unit tests. When it comes to app architectures, it can be a challenge to write those tests. Following an MVVM pattern, how do we unit test a view and its viewModel? That's what I would like to cover here using dependency injection.
How to implement MVVM pattern in Swift from scratch
Creating a new app often raises the question of what architecture to choose and which pattern would fit best. In this post, I show how to implement an MVVM pattern around a sample app in Swift.
Kronos, an iOS app to make runners love numbers
In 2017, I managed to run about 750 miles (1200 km); that's 250 miles more than the year before. I know it because Strava tracked it for me. I'm such a fan of their product that using it has become part of my routine and my training. However, during that journey, I always missed numbers that talked to me. That is how I created Kronos.
Starting your year the right way
Starting a new year is always exciting. Most of us have new resolutions and a bucket list we want to accomplish for 2018, but quite often, as soon as something goes wrong, the whole list goes wrong. Here is some advice to keep track of it.
Do you need a Today extension for your iOS app?
For the last couple of months, I observed the Today extensions of some of the iOS apps I use daily to see when those widgets are useful and what justifies developing one. Here are my conclusions.
Face detection in iOS with Core ML and Vision in Swift
With iOS 11, Apple introduced the ability to integrate machine learning into mobile apps with Core ML. As promising as it sounds, it also has some limitations; let's discover them around a face detection sample app.
Making five years in three
I always thought a good way to stay motivated and look forward is to have goals you can accomplish in the short term, about 3 to 12 months maximum. It's at least the way I dealt with my life after graduating.
How to use Javascript with WKWebView in Swift
Embedding web content into native apps is a frequent approach to quickly add content to a mobile app. It can be for a contact form, but also for more complex content to bootstrap a missing native feature. But you can go further and build a two-way bridge between web and mobile using JavaScript and Swift.
Using Charles as SSL Proxy on iOS
Most apps use HTTPS requests to access data, and because of SSL encryption, it can be tough to debug them in iOS apps that are already on the App Store. Charles is the perfect tool to help you inspect your HTTPS requests.
Create your private CocoaPod library
Libraries and external dependencies have always been a good way to avoid developers recreating something that already exists. It's also a good way to help each other and leave something reusable. CocoaPods is the most used tool to manage dependencies around Xcode projects. Let's see how to create your own private pod.
How to be what you want to be
Starting 2017, I decided that this year would be mine. It doesn't mean everything would be given, but I would stay open to new opportunities, stay an actor in my own life, and be what I want to be. Halfway through, it is time for reflection.
Build your Android app with Bitbucket Pipeline and HockeyApp
Configuring continuous integration can be tricky for mobile apps. Let's see how quick it is to build an Android app with Bitbucket Pipeline and deliver it with App Center (ex HockeyApp).
How to migrate from WordPress to a static website with Hugo and AWS
Recently, I got a reminder that my domain name and shared host would eventually expire this summer. I always had a WordPress site and thought it was time to move on to something easier to maintain. Here is how I managed to migrate my WordPress blog to a static website with Hugo on AWS.
10 weeks training with running mobile apps
This year, I finally signed up for a marathon, and the way I use running apps and their services has clearly changed. Giving the best user experience around those services is essential to make the app useful. Here is my feedback as a mobile developer during my last 10 weeks of training.
French Election 2017, don't get fooled by surveys
Technology has never been as important in politics as it is today. Everything is related to numeric data. If we only analyze the news around the 2016 US elections, it was mostly about email hacks, fake news in daily news feeds, or online surveys. Concerned about the 2017 French elections, I wanted to be a bit more active and do something related to the last one: online surveys.
Six months of Android development
In my current role at Qudini, I started as an iOS developer. My main task was to create and improve our mobile products for iOS devices based on what was already done on Android. However, I wanted to be more efficient in my job, and I thought it could be by impacting more users through Android development. Once our iOS apps were at the same level as the Android ones, I pushed the idea that it would be better if I started doing Android too. Here is my feedback after 6 months of developing on Android.
Feature flag your mobile app with Apptimize
Recently, I got the chance to integrate feature flags into a mobile app I work on. The idea of a feature flag is simple: it lets you enable and manage features in your mobile app remotely without requiring a new release. Let's see the benefits of it and how to integrate a feature flag solution like Apptimize's.
Xcode script automation for SauceLabs
A couple of months ago, I tried to set up a mobile testing environment with Appium, and one of the best tools to execute these tests was SauceLabs, a cloud platform dedicated to testing. SauceLabs is pretty easy to use, but here are a couple of tricks to make it even easier.
Mobile continuous delivery with bitrise
Continuous integration and continuous delivery is something I wanted to do a while ago, especially since Apple accelerated its approval process for publishing new apps on its mobile store. It can now take less than a day to have an update available for your mobile users: continuous integration and continuous delivery make more sense than ever for mobile apps.
How can a developer do marketing?
Working as a mobile developer, I created multiple apps over the last couple of years for companies I worked for, and eventually for personal projects. At the beginning, I thought the goal for any developer was the release itself: shipping code and moving on, but I quickly found out that it was more frustrating than anything to stop there. That's how I started thinking about what the next step should be, and whether a developer can actually do marketing, and how.
Growth Hacking applied to your LinkedIn profile to get a new job
I recently finished Growth Hacking Marketing by Ryan Holiday and learnt a lot of things from it. Some of them reminded me of the way I found my job in London and how I tweaked my LinkedIn profile to fit the targeted audience.
How to create an iOS app for Sens'it tracker in Swift
Sens'it is a small tracker developed by Sigfox and given away for free during events to let people test the Sigfox low-frequency IoT network. Let's see how to create an iOS app in Swift based on the Sens'it API.
How to keep your privacy in mobile apps
A couple of years ago, I worked on a mobile app involving video and audio recording. I quickly saw that, once the user agrees to the permissions, it can be easy to track personal data without the user noticing it. Let's see how to limit mobile app permissions to maintain user privacy.
Appium, when automation testing can be randomly wrong
Appium is a UI automation testing framework helping developers automatically test their app. This tool can be really powerful, but my experience with it leads me to think it's not accurate enough to be used every day and at its full potential.
UI Automation testing on iOS9
During WWDC 2015, Apple announced big stuff, but they also released awesome features for developers. One of them was dedicated to UI testing. Working on UI automation tests, I've just discovered the latest Xcode 7 and how life is going to be easier with their latest feature for that.
How to work with native iOS and javascript callbacks in Objective-C
Recently I worked on a small iOS mobile project around JavaScript. I wanted to load web content with JavaScript inside from iOS and get callbacks from JavaScript into iOS, to save native data and transmit it to another controller if needed. The second part was also to call JavaScript methods from the iOS side.
AmbiMac, an app creating your own ambilight
Philips created few years ago Ambilight, a TV with a dynamic lights on it back. With two friends, we wanted to design an app with a similar function based on connected light bulb during an hackathon. Here is what we have done in 24h hours of code, letâs meet AmbiMac.
Introduction to sleep analysis with HealthKit with Swift
HealthKit is a powerful tool if you want to create an iOS mobile app based on health data. However, itâs not only for body measurements, fitness or nutrition; itâs also sleep analysis. In this HealthKit tutorial, I will show you how to read and write some sleep data and save them in Health app.
UPDATE - April 2020: Originally written for Swift 1.0, then 2.0, Iâve updated this post for latest Swift 5.1 version and Xcode 11.3.
Dynamic url rewriting in CodeIgniter
I work with CodeIgniter almost exclusively on API, but sometimes it can help on short-lived websites. Rewrite url is a good thing to know if you want to optimize SEO for your key pages of a website. Thatâs what I want to show you and how itâs easy to set it up.
Le métier de développeur dans les objets connectés
Pour la fin de mes Ă©tudes, jâai choisi de rĂ©diger mon mĂ©moire sur les objets connectĂ©s et plus prĂ©cisĂ©ment sur le dĂ©veloppement de services numĂ©riques autour de ces objets. Ce travail de fond mâa permis de prendre du recul sur mon travail mais câĂ©tait aussi lâoccasion de trouver une dĂ©finition de ce quâest un dĂ©veloppeur dâobjet connectĂ©.
Majordhome, le projet né durant un startup weekend
En Octobre dernier, jâavais travaillĂ© sur le cocktailMaker, un objet connectĂ© facilitant la crĂ©ation de cocktails. Voulant pousser le concept un peu plus loin, je me suis inscrit au startup weekend de Novembre organisĂ© Ă lâEM Lyon pour dĂ©couvrir les aspects marketing et business qui me manque aujourdâhui. Retour sur ces 54h de travail acharnĂ©.
Les difficultés autour des objets connectés
Ces temps ci, il y a beaucoup de bruits autour des objets connectĂ©s. Tous les jours, on dĂ©couvre de nouveaux articles sur des objets connectĂ©s annoncĂ©s sur le marchĂ© ou financĂ©s sur des plateformes de âcrowdfundingâ. On a bien moins dâinformations sur toutes les difficultĂ©s liĂ©es autour de ces projets innovants. Voici mes conclusions sur les recherches que jâai faites Ă ce sujet.
CocktailMaker, l'objet connecté 100% hackathon
LâannĂ©e derniĂšre Ă cette mĂȘme pĂ©riode, jâai participĂ© au Fhacktory, ce hackathon nouvelle gĂ©nĂ©ration nĂ© Ă Lyon, avec une application mobile dĂ©diĂ©e Ă la chute libre. Cette annĂ©e, jâai pu Ă nouveau monter sur le podium de cet Ă©vĂšnement en dĂ©veloppement un objet connectĂ©, le CocktailMaker. Retour sur ce week-end 100% hack.
Comment Jawbone s'adapte Ă l'Internet des Choses
Sur la place des objets connectĂ©s, Jawbone est rapidement devenu un pilier du âquantified-selfâ (auto-mesure) avec ses bracelets UP et UP24. Je vous propose un dĂ©cryptage des leurs derniĂšres Ă©volutions afin de rester Ă la pointe du âwearableâ.
Moto360 ou Withings Activité
De plus en plus de montres connectĂ©es font leur apparition, mais dâaprĂšs moi, la plupart passe Ă cĂŽtĂ© de lâessentiel: la montre reste lâun des seuls accessoires masculin, il faut donc la rendre Ă©lĂ©gante en respectant sa forme historique. Câest pourquoi, je mâintĂ©resse dans cet article principalement aux montres âhabillĂ©esâ et en attendant la sortie de celle dâApple, je vous propose un comparatif entre la montre connectĂ©e de Motorola et celle de Withings, fraichement annoncĂ©e.
Mes premiers pas vers le Lean Startup
Ne voulant pas me limiter Ă mon background technique, jâessaie de plus en plus de dĂ©velopper des notions dâentrepreneuriat dans lâidĂ©e dâĂȘtre plus utile dans mon analyse technique et de continuer la reflexion autour de diffĂ©rents dĂ©veloppement dâapplications dans une start-up. LâidĂ©e est de ne pas se limiter au dĂ©veloppement demandĂ©, mais dâessayer dâapprĂ©hender toute la chaine de rĂ©flexion, Ă savoir du besoin de clients jusquâĂ lâutilisation dâun nouveau service/produit dĂ©veloppĂ© et de voir comment celui-ci est utilisĂ© et ce quâil faut amĂ©liorer.
Pour cela, et avec les conseils avisĂ©s dâun ami , Maxime Salomon, jâai commencĂ© Ă lire The Lean Startup de Eric Ries. Ce livre aborde de nombreux sujets autour de lâentrepreneuriat, du marketing ainsi que de dĂ©veloppement de produit Ă proprement parlĂ©. LâidĂ©e est de proposer un cycle itĂ©ratif de dĂ©veloppement pouvant permettre de mesurer rapidement diffĂ©rents paramĂštres pour faire Ă©voluer un produit en fonction de nouvelles donnĂ©es.
Etant dâun formation plus scientifique, jâai ce besoin de mettre en pratique ce dont il est question pour mieux comprendre la solution proposĂ©e, jâai aussi un besoin de me documenter sur les diffĂ©rents termes employĂ©s pour ne pas passer Ă cĂŽtĂ© du sujet, câest pourquoi je prends mon temps pour lire ce livre, mais je vous propose mon retour dâexpĂ©rience sur mes premiers acquis et comment jâessaie de les mettre en pratique.
UP24 - Découverte du bracelet connecté de Jawbone
Nous dĂ©couvrons chaque jour de plus en plus dâobjets connectĂ©s, ils se divisent en plusieurs catĂ©gories comme la santĂ©, la musique, la lumiĂšre, etc. Une bonne partie se retrouve aussi dans le tracking dâactivitĂ© comme le bracelet Jawbone UP. Etant intĂ©ressĂ© de connaitre les performances de ces objets connectĂ©s dit âwearableâ, je vous propose mon retour dâexperience sur le bracelet UP24 ainsi que les services proposĂ©s autour.
Introduction Ă Soundcloud
Soundcloud est une des plus grosses plateformes de musique indĂ©pendante, câest plus de 200 millions dâutilisateurs pour ce rĂ©seau sociale basĂ© sur le partage musicale. Certains artistes ne publient leurs musiques que sur cette plateforme. Câest aussi la place pour des novices qui veulent essayer leurs titres et se faire connaitre. Vous pouvez aussi y retrouver des discours, des podcasts et tout autres types de contenu audio.
Dans cette optique de toujours avoir de la bonne musique, Soundcloud est disponible sur toutes les plateformes (web et mobile) et lâĂ©coute est gratuite. Pour une utilisation encore plus variĂ©e de leur service, SoundCloud propose une API ainsi que de nombreux SDK (Javascript, Ruby, Python, PHP, Cocoa et Java). Nous allons voir ensemble comment intĂ©grer SoundCloud dans une application mobile iPhone.
Comment réussir son premier entretien
Passer un entretien pour un poste est toujours un peu stressant. Suivant comment ce stress est gĂ©rĂ©, la personne peut donner une image de quelquâun qui nâest pas sĂ»re de soi par ses gestes (tremblement, bafouillement, se frotter les mains) ou par ses mots (ne pas finir ses phrases, phrases Ă rallonge trop complexe, etc). Difficile dans ces cas lĂ de donner la meilleure image de soi pour montrer quâon est travailleur, motivĂ© et prĂȘt Ă lâemploi.
Je vous propose par mon retour dâexperience quelques conseils simples.
Spotify et ses outils d'intégration
AprĂšs avoir travaillĂ© avec les technologies Deezer, nous allons voir quels outils sont proposĂ©s par Spotify pour une intĂ©gration web ou mobile. Spotify proposant une Ă©coute gratuite sur son client ordinateur et depuis peu sur mobile (parsemĂ© de publicitĂ©), il se dĂ©marque de Deezer qui nĂ©cessite dâavoir un compte Premium pour une utilisation sur smartphone. LâintĂ©gration pour les dĂ©veloppeurs est aussi diffĂ©rente, mais Ă quelle mesure? Câest ce que nous allons voir.
Hackathon: ma maison connectÄe
Les objets connectÄs sont de plus en plus prĂ©sents chez nous. On y retrouve des produits comme des ampoules, des enceintes audio ainsi que des prises intelligentes. On y retrouve aussi des produits plus innovants comme le pĂšse personne de Withings, la balle de Sphero, la lampe connectĂ©e âholĂźâ ou encore le capteur pour plante de Parrot.
Câest dans cette optique lĂ que lâentreprise Direct Energie a organisĂ©e un hackathon autour des objets connectĂ©s pour prĂ©senter diffĂ©rentes solutions autour de la maĂźtrise dâĂ©nergie et des objets intelligents.
Câest en tant que support technique sur le produit âholĂźâ et son SDK que jây ai participĂ©, afin dâaider les dĂ©veloppeurs Ă se familiariser avec lâoutil. Ayant fait un hackathon du cĂŽtĂ© dĂ©veloppeur, câest un nouveau retour dâexpĂ©rience cette fois ci du cĂŽtĂ© partenaire.
SpriteKit, un framework iOS7 pour jeu video
Au jour dâaujourdâhui, les jeux vidĂ©os sont de plus en plus prĂ©sent. Avec lâunivers du smartphone, il est de plus en plus facile dâembarquer des jeux vidĂ©os avec nous et ce partout.
Plusieurs jeux ont eu un tel succĂšs quâil reste difficile dâignorer cet utilisation de nos tĂ©lĂ©phones en tant que console. A nâen citer que quelques-uns: DoodleJump, AngryBird ou encore le fameux CandyCrush.
Depuis la sortie dâiOS7, Apple a rajoutĂ© un framework de jeu vidĂ©o 2D directement dans son SDK: SpriteKit. Nous allons voir ensemble comment lâutiliser.
Fhacktory, un hackathon nouvelle génération
Un hackathon est lâĂ©quivalent dâun marathon sur le domaine du dĂ©veloppement informatique. Bien connu sous le systĂšme de âStartup Weekendâ, ce principe a Ă©tĂ© adaptĂ© dans lâinformatique au dĂ©veloppement de projet en un temps donnĂ©. Le but est de monter en un weekend une Ă©quipe qui Ă©voluera autour dâune idĂ©e et proposera une solution Ă un problĂšme. Jâai rĂ©cemment participĂ© Ă lâun dâentre eux, le Fhactory: un hackathon se dĂ©finissant â100% hack, 0% bullshitâ et voici mon retour dâexpĂ©rience.
à la découverte des outils de Deezer
Deezer Ă©tant lâune des plus grosse plateforme dâĂ©coute et de partage de musique, il est intĂ©ressant de voir comment se servir des diffĂ©rents outils quâil nous met Ă disposition Ă savoir son API de recherche de morceau et ses diffĂ©rents SDK pour une intĂ©gration web ou mobile.
Nous allons voir ensemble, comment les utiliser, Ă quelles fins et quelles en sont les limites. Pour le SDK, je ne mâintĂ©resserai quâĂ celui pour iOS.
iJump, une application iPhone pour les parachutistes
En lançant le portail web de mĂ©tĂ©o Weather, mon idĂ©e Ă©tait dâen faire un support pour une version mobile. En effet lâintĂ©rĂȘt pour des donnĂ©es mĂ©tĂ©orologiques est de rester nomade et suivre son utilisateur. En intĂ©grant diffĂ©rentes notions associĂ©es Ă la chute libre et avec lâaide de la FĂ©dĂ©ration Française de Parachutisme, voici iJump: lâapplication mobile pour les parachutistes.
La formation au développement mobile
Il y a maintenant 6 mois, jâai commencĂ© une formation afin de devenir enseignant sur les languages Cocoa et Objective-C.
Cette formation a compris plusieurs étapes, chacune finissant par un examen afin de passer à la suivante:
- Une partie pĂ©dagogique au cours de laquelle nous sommes Ă©valuĂ©s sur notre capacitĂ© Ă communiquer un message, Ă faire comprendre une technologie, Ă la gestion de notre temps de parole ainsi quâĂ la tenue une classe.
- Une partie technique oĂč lâĂ©valuation se portait exclusivement sur la connaissance des technologies auxquelles je mâĂ©tais proposĂ©. Pour ma part, cela mâa permis de revoir les fondements de Cocoa ainsi que de lâhistorique la sociĂ©tĂ© NeXT.
Voici mes différents retours sur ma premiÚre experience de formateur.
Sencha Touch: framework HTML5 pour application mobile
Introduction:
Sencha est un framework HTML5 pour crĂ©er des application mobiles multiplateformes. LâintĂ©rĂȘt de celui-ci est de faire, Ă partir dâun projet HTML et de code JSON, une mĂȘme application mobile sur plusieurs plateformes, un gain de temps incroyable si le code sây tient. Nous allons voir les premiers pas dâune application Ă partir de Sencha.
MVVM Light sous Windows Phone 8 SDK
Le nouveau systĂšme dâexploitation Windows 8 va de paire avec la mise Ă jour de son systĂšme sur mobile: Windows Phone 8.
Voici une petite introduction Ă MVVM Light Toolkit, un jeu de composant se basant sur une structure Model-View-ViewModel sur les frameworks XAML/C#, pouvant ĂȘtre utilisĂ© pour un dĂ©veloppement sur Windows Phone 8.
Réalisation: Weather, un portail météo pour la chute libre
Contexte:
Ayant récemment été initié à la chute libre, cette discipline est largement dépendante de la météo.
Malheureusement, trouver la mĂ©tĂ©o en temps en âtemps rĂ©elâ suivant son centre de saut nâest pas chose aisĂ©. MĂȘme Ă 10km de son centre de saut, la diffĂ©rence mĂ©tĂ©orologique peut ĂȘtre significative quant Ă la pratique du parachutisme.
Câest pourquoi jâai dĂ©cidĂ© de developper un portail web permettant de consulter le dernier relevĂ© mĂ©tĂ©o de nâimporte quel centre de saut en France, datant de moins de 12h.
Intégration de DataMapper dans CodeIgniter
Introduction:
Un ORM (Object-relational mapping) est utilisĂ© dans la programmation orientĂ© objet afin de crĂ©er virtuellement un modĂšle en se basant sur une base de donnĂ©e. Cela Ă©vite de devoir Ă©crire les requĂȘtes dans la base de donnĂ©e soit mĂȘme, un vrai gain de temps.
Réalisation: iDevWeb - Mise à jour
Librairie Restkit et synchronisation de données
Introduction
La synchronisation de donnĂ©es en ligne est une pratique courante afin dâavoir un contenu mis Ă jour Ă chaque utilisation (applications dâinformations, de news et autres).
Trouver un moyen simple dâembarquer ces donnĂ©es avant une synchronisation en ligne est intĂ©ressant, permettant une utilisation de lâapplication mĂȘme si les donnĂ©es ne sont pas Ă jour.
Travaillant en Objective-C sur des applications mobiles pour iphone/ipad, nous allons voir comment utiliser Restkit Ă ces fins.
Quel-camping.fr
AprĂšs avoir fini ma premiĂšre annĂ©e dâĂ©tude en informatique, jâai eu lâidĂ©e de rĂ©aliser un site internet pour une premiĂšre experience professionnelle Ă mon compte.
Des idĂ©es Ă lâĂ©tude:
AprĂšs quelques idĂ©es ainsi que des conseils avisĂ©s dâun jeune entrepreneur, jâai dĂ©cidĂ© de choisir la branche du tourisme et plus prĂ©cisĂ©ment le domaine de lâhotellerie de plein air.
En effet, ce domaine est peu exploitĂ© sur internet alors que le nombre de rĂ©servation de sĂ©jour en camping continuait dâaugmenter.
Réalisation: iDevWeb - Gestion de projets web
Quand on est dĂ©veloppeur web, il arrive quâon travaille sur plusieurs projets en mĂȘme temps et quâon conserve dâanciens projets sans les supprimer.
En utilisant MAMP sous MAC OS X, il faut accĂ©der Ă lâurl exacte du dossier pour pouvoir accĂ©der au site web, il nâexiste pas par dĂ©faut une page qui indexe les dossiers contenus dans le dossier de dĂ©veloppement.
Câest lĂ que jâai eu lâidĂ©e de dĂ©velopper un petit portail en php qui listerait les dossiers contenus dans mon dossier de dĂ©veloppement, cela Ă©viterait de devoir se rappeler du nom du projet ainsi que du chemin exacte pour y accĂ©der.
Réécriture d'urls avec htaccess sous CodeIgniter
Le principe de réécriture dâurls permet de âtransformerâ les urls pour rĂ©fĂ©rencer plus simplement des pages clĂ©s dâun site internet. Pour cela on utilise le fichier htaccess, un fichier cachĂ© situĂ© Ă la racine du dossier de lâapplication.
Nous allons voir comment est géré par défaut les urls dans le framework CodeIgniter et comment les modifier pour éviter de perdre le référencement déjà acquis sur un site web.
CodeIgniter et son modĂšle MVC
CodeIgniter est un framework php open source basé sur une architecture MVC.
Rappel:
Lâarchitecture MVC (ModĂšle â Vue â Controller) permet dâorganiser plus simplement une application.
- ModÚle : type de données, objet
- Vue: interface avec lâutilisateur
- ContrÎleur: traitement des données, gestion des évÚnements.
Un framework est un kit qui permet de crĂ©er la base dâune application plus rapidement et avec une structure plus solide.
Présentation:
CodeIgniter a pour avantage dâĂȘtre libre mais surtout dâĂȘtre plus lĂ©ger comparĂ© aux autres frameworks php connus. Il possĂšde un âguide utilisateurâ (en ligne sur le site officiel et localement dans le dossier tĂ©lĂ©chargĂ©) plus que complet qui propose de nombreux exemples dâapplications. La mise en place est intuitive et aucune configuration nâest nĂ©cessaire pour une utilisation simple.

Tips, Tricks, and Techniques on using Cascading Style Sheets.
We Completely Missed width/height: stretch
- Articles
- box model
- css properties
- layout
The TL;DR is that stretch
does the same thing as declaring 100%
, but ignores padding when looking at the available space.
We Completely Missed width/height: stretch originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
The stretch keyword, which you can use with width and height (as well as min-width, max-width, min-height, and max-height, of course), was shipped in Chromium web browsers back in June 2025. But the value is actually a unification of the non-standard -webkit-fill-available and -moz-available values, the latter of which has been available to use in Firefox since 2008. The issue was that, before the @supports at-rule, there was no nice way to implement the right value for the right web browser, and I suppose we just forgot about it after that until, whoops, one day I see Dave Rupert casually put it out there on Bluesky a month ago: Layout pro Miriam Suzanne recorded an explainer shortly thereafter. Itâs worth giving this value a closer look. What does stretch do? The quick answer is that stretch does the same thing as declaring 100%, but ignores padding when looking at the available space. In short, if youâve ever wanted 100% to actually mean 100% (when using padding), stretch is what youâre looking for: div { padding: 3rem 50vw 3rem 1rem; width: 100%; /* 100% + 50vw + 1rem, causing overflow */ width: stretch; /* 100% including padding, no overflow */ } The more technical answer is that the stretch value sets the width or height of the elementâs margin box (rather than the box determined by box-sizing) to match the width/height of its containing block. Note: Itâs never a bad idea to revisit the CSS Box Model for a refresher on different box sizings. And on that note â yes â we can achieve the same result by declaring box-sizing: border-box, something that many of us do, as a CSS reset in fact. *, ::before, ::after { box-sizing: border-box; } I suppose that itâs because of this solution that we forgot all about the non-standard values and didnât pay any attention to stretch when it shipped, but I actually rather like stretch and donât touch box-sizing at all now. Yay stretch, nay box-sizing There isnât an especially compelling reason to switch to stretch, but there are several small ones. Firstly, the Universal selector (*) doesnât apply to pseudo-elements, which is why the CSS reset typically includes ::before and ::after, and not only are there way more pseudo-elements than we might think, but the rise in declarative HTML components means that weâll be seeing more of them. Do you really want to maintain something like the following? *, ::after, ::backdrop, ::before, ::column, ::checkmark, ::cue (and ::cue()), ::details-content, ::file-selector-button, ::first-letter, ::first-line, ::grammar-error, ::highlight(), ::marker, ::part(), ::picker(), ::picker-icon, ::placeholder, ::scroll-button(), ::scroll-marker, ::scroll-marker-group, ::selection, ::slotted(), ::spelling-error, ::target-text, ::view-transition, ::view-transition-image-pair(), ::view-transition-group(), ::view-transition-new(), ::view-transition-old() { box-sizing: border-box; } Okay, Iâm being dramatic. Or maybe Iâm not? I donât know. Iâve actually used quite a few of these and having to maintain a list like this sounds dreadful, although Iâve certainly seen crazier CSS resets. Besides, you might want 100% to exclude padding, and if youâre a fussy coder like me you wonât enjoy un-resetting CSS resets. Animating to and from stretch Opinions aside, thereâs one thing that box-sizing certainly isnât and thatâs animatable. 
If you didnât catch it the first time, we do transition to and from 100% and stretch: Because stretch is a keyword though, youâll need to interpolate its size, and you can only do that by declaring interpolate-size: allow-keywords (on the :root if you want to activate interpolation globally): :root { /* Activate interpolation */ interpolate-size: allow-keywords; } div { width: 100%; transition: 300ms; &:hover { width: stretch; } } The calc-size() function wouldnât be useful here due to the web browser support of stretch and the fact that calc-size() doesnât support its non-standard alternatives. In the future though, youâll be able to use width: calc-size(stretch, size) in the example above to interpolate just that specific width. Web browser support Web browser support is limited to Chromium browsers for now: Opera 122+ Chrome and Edge 138+ (140+ on Android) Luckily though, because we have those non-standard values, we can use the @supports at-rule to implement the right value for the right browser. The best way to do that (and strip away the @supports logic later) is to save the right value as a custom property: :root { /* Firefox */ @supports (width: -moz-available) { --stretch: -moz-available; } /* Safari */ @supports (width: -webkit-fill-available) { --stretch: -webkit-fill-available; } /* Chromium */ @supports (width: stretch) { --stretch: stretch; } } div { width: var(--stretch); } Then later, once stretch is widely supported, switch to: div { width: stretch; } In a nutshell While this might not exactly win Feature of the Year awards (I havenât heard a whisper about it), quality-of-life improvements like this are some of my favorite features. If youâd rather use box-sizing: border-box, thatâs totally fine â it works really well. Either way, more ways to write and organize code is never a bad thing, especially if certain ways donât align with your mental model. Plus, using a brand new feature in production is just too tempting to resist. Irrational, but tempting and satisfying! We Completely Missed width/height: stretch originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
The thing about contrast-color
- Links
- color
- CSS functions
One of our favorites, Andy Clarke, on the one thing keeping the CSS contrast-color()
function from true glory:
For my website design, I chose a dark blue background colour (
#212E45
) and light text (#d3d5da
). This
âŠ
The thing about contrast-color originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
One of our favorites, Andy Clarke, on the one thing keeping the CSS contrast-color() function from true glory: For my website design, I chose a dark blue background colour (#212E45) and light text (#d3d5da). This colour is off-white to soften the contrast between background and foreground colours, while maintaining a decent level for accessibility considerations. But hereâs the thing. The contrast-color() function chooses either white for dark backgrounds or black for light ones. At least to my eyes, that contrast is too high and makes reading less comfortable, at least for me. Word. White and black are two very safe colors to create contrast with another color value. But the amount of contrast between a solid white/black and any other color, while offering the most contrast, may not be the best contrast ratio overall. This was true when added a dark color scheme to my personal website. The contrast between the background color, a dark blue (hsl(238.2 53.1% 12.5%), and solid white (#fff) was too jarring for me. To tone that down, Iâd want something a little less opaque than what, say hsl(100 100% 100% / .8), or 20% lighter than white. Canât do that with contrast-color(), though. Thatâs why I reach for light-dark() instead: body { color: light-dark(hsl(238.2 53.1% 12.5%), hsl(100 100% 100% / .8)); } Will contrast-color() support more than a black/white duo in the future? The spec says yes: Future versions of this specification are expected to introduce more control over both the contrast algorithm(s) used, the use cases, as well as the returned color. Iâm sure itâs one of those things that âs easier said than done, as the ârightâ amount of contrast is more nuanced than simply saying itâs a ratio of 4.5:1. There are user preferences to take into account, too. And then it gets into weeds of work being done on WCAG 3.0, which Danny does a nice job summarizing in a recent article detailing the shortcomings of contrast-color(). The thing about contrast-color originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Getting Creative With shape-outside
- Articles
- css shapes
- images
- UI/IX Design
There are so many creative opportunities for using shape-outside
that Iâm surprised I see it used so rarely. So, how can you use it to add personality to a design? Hereâs how I do it.
Getting Creative With shape-outside originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Last time, I asked, âWhy do so many long-form articles feel visually flat?â I explained that: âImages in long-form content can (and often should) do more than illustrate. They can shape how people navigate, engage with, and interpret what theyâre reading. They help set the pace, influence how readers feel, and add character that words alone canât always convey.â Then, I touched on the expressive possibilities of CSS Shapes and how, by using shape-outside, you can wrap text around an imageâs alpha channel to add energy to a design and keep it feeling lively. There are so many creative opportunities for using shape-outside that Iâm surprised I see it used so rarely. So, how can you use it to add personality to a design? Hereâs how I do it. Patty Meltt is an up-and-coming country music sensation. My brief: Patty Meltt is an up-and-coming country music sensation, and she needed a website to launch her new album and tour. She wanted it to be distinctive-looking and memorable, so she called Stuff & Nonsense. Pattyâs not real, but the challenges of designing and developing sites like hers are. Most shape-outside guides start with circles and polygons. Thatâs useful, but it answers only the how. Designers need the why â otherwise itâs just another CSS property. Whatever shape its subject takes, every image sits inside a box. By default, text flows above or below that box. If I float an image left or right, the text wraps around the rectangle, regardless of whatâs inside. Thatâs the limitation shape-outside overcomes. shape-outside lets you break free from those boxes by enabling layouts that can respond to the contours of an image. That shift from images in boxes to letting the image content define the composition is what makes using shape-outside so interesting. Solid blocks of text around straight-edged images can feel static. But text that bends around a guitar or curves around a portrait creates movement, which can make a story more compelling and engaging. CSS shape-outside enables text to flow around any custom shape, including an imageâs alpha channel (i.e., the transparent areas): img { float: left; width: 300px; shape-outside: url('patty.webp'); shape-image-threshold: .5; shape-margin: 1rem; } First, a quick recap. For text to flow around elements, they need to float either left or right and have their width defined. The shape-outside URL selects an image with an alpha channel, such as a PNG or WebP. The shape-image-threshold property sets the alpha channel threshold for creating a shape. Finally, thereâs the shape-margin property which â believe it or not â creates a margin around the shape. Interactive examples from this article are available in my lab. Multiple image shapes When Iâm adding images to a long-form article, I ask myself, âHow can they help shape someoneâs experience?â Flowing text around images can slow people down a little, making their experience more immersive. Visually, it brings text and image into a closer relationship, making them feel part of a shared composition rather than isolated elements. Columns without shape-outside applied to them Pattyâs life story â from singing in honky-tonks to headlining stadiums â contains two sections: one about her, the other about her music. I added a tall vertical image of Patty to her biography and two smaller album covers to the music column: <section id="patty"> <div> <img src="patty.webp" alt=""> [...] </div> <div> <img src="album-1.webp" alt=""> [...] <img src="album-2.webp" alt=""> [...] 
</div> </section> A simple grid then creates the two columns: #patty { display: grid; grid-template-columns: 2fr 1fr; gap: 5rem; } I like to make my designs as flexible as I can, so instead of specifying image widths and margins in static pixels, I opted for percentages on those column widths so their actual size is relative to whatever the size of the container happens to be: #patty > *:nth-child(1) img { float: left; width: 50%; shape-outside: url("patty.webp"); shape-margin: 2%; } #patty > *:nth-child(2) img:nth-of-type(1) { float: left; width: 45%; shape-outside: url("album-1.webp"); shape-margin: 2%; } #patty > *:nth-child(2) img:nth-of-type(2) { float: right; width: 45%; shape-outside: url("album-2.webp"); shape-margin: 2%; } Columns with shape-outside applied to them. See this example in my lab. Text now flows around Pattyâs tall image without clipping paths or polygons â just the natural silhouette of her image shaping the text. Building rotations into images. When an image is rotated using a CSS transform, ideally, browsers would reflow text around its rotated alpha channel. Sadly, they donât, so itâs often necessary to build that rotation into the image. shape-outside with a faux-centred image For text to flow around elements, they need to be floated either to the left or right. Placing an image in the centre of the text would make Pattyâs biography design more striking. But thereâs no center value for floats, so how might this be possible? Pattyâs image set between two text columns. See this example in my lab. Pattyâs bio content is split across two symmetrical columns: #dolly { display: grid; grid-template-columns: 1fr 1fr; } To create the illusion of text flowing around both sides of her image, I first split it into two parts: one for the left and the other for the right, both of which are half, or 50%, of the original width. Splitting the image into two pieces. Then I placed one image in the left column, the other in the right: <section id="dolly"> <div> <img src="patty-left.webp" alt=""> [...] </div> <div> <img src="patty-right.webp" alt=""> [...] </div> </section> To give the illusion that text flows around both sides of a single image, I floated the left columnâs half to the right: #dolly > *:nth-child(1) img { float: right; width: 40%; shape-outside: url("patty-left.webp"); shape-margin: 2%; } âŠand the right columnâs half to the left, so that both halves of Pattyâs image combine right in the middle: #dolly > *:nth-child(2) img { float: left; width: 40%; shape-outside: url("patty-right.webp"); shape-margin: 2%; } Faux-centred image. See this example in my lab. Faux background image So far, my designs for Pattyâs biography have included a cut-out portrait with a clearly defined alpha channel. But, I often need to make a design that feels looser and more natural. Faux background image. See this example in my lab. Ordinarily, I would place a picture as a background-image, but for this design, I wanted the content to flow loosely around Patty and her guitar. Large featured image So, I inserted Pattyâs picture as an inline image, floated it, and set its width to 100%; <section id="kenny"> <img src="patty.webp" alt=""> [...] </section> #kenny > img { float: left; width: 100%; max-width: 100%; } There are two methods I might use to flow text around Patty and her guitar. First, I might edit the image, removing non-essential parts to create a soft-edged alpha channel. 
Then, I could use the shape-image-threshold property to control which parts of the alpha channel form the contours for text wrapping: #kenny > img { shape-outside: url("patty.webp"); shape-image-threshold: 2; } Edited image with a soft-edged alpha channel However, this method is destructive, since much of the texture behind Patty is removed. Instead, I created a polygon clip-path and applied that as the shape-outside, around which text flows while preserving all the detail of my original image: #kenny > img { float: left; width: 100%; max-width: 100%; shape-outside: polygon(âŠ); shape-margin: 20px; } Original image with a non-destructive clip-path. I have little time for writing polygon path points by hand, so I rely on Bennett Feelyâs CSS clip-path maker. I add my image URL, draw a custom polygon shape, then copy the clip-path values to my shape-outside property. Bennett Feelyâs CSS clip path maker. Text between shapes Patty Meltt likes to push the boundaries of country music, and I wanted to do the same with my design of her biography. I planned to flow text between two photomontages, where elements overlap and parts of the images spill out of their containers to create depth. Text between shapes. See this example in my lab. So, I made two montages with irregularly shaped alpha channels. Irregularly shaped alpha channels I placed both images above the content: <section id="johnny"> <img src="patty-1.webp" alt=""> <img src="patty-2.webp" alt=""> [âŠ] </section> âŠand used those same image URLs as values for shape-outside: #johnny img:nth-of-type(1) { float: left; width: 45%; shape-outside: url("patty-1.webp"); shape-margin: 2%; } #johnny img:nth-of-type(2) { float: right; width: 35%; shape-outside: url("img/patty-2.webp"); shape-margin: 2%; } Content now flows like a river in a country song, between the two image montages, filling the design with energy and movement. Conclusion Too often, images in long-form content end up boxed in and isolated, as if they were dropped into the page as an afterthought. CSS Shapes â and especially shape-outside â give us a chance to treat images and text as part of the same composition. This matters because design isnât just about making things usable; itâs about shaping how people feel. Wrapping text around the curve of a guitar or the edge of a portrait slows readers down, invites them to linger, and makes their experience more immersive. It brings rhythm and personality to layouts that might otherwise feel flat. So, next time you reach for a rectangle, pause and think about how shape-outside can help turn an ordinary page into something memorable. Getting Creative With shape-outside originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Same Idea, Different Paint Brush
- Articles
- inspiration
Naturally, everything looks like code when I'm staring at a blank canvas. That's whether the canvas is paper, a screen, some Figma artboard, or what have you.
Same Idea, Different Paint Brush originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Thereâs the idiom that says everything looks like a nail when all you have is a hammer. I also like the one about worms in horseradish seeing the world as horseradish. Thatâs what it felt like for me as I worked on music for an album of covers I released yesterday. I was raised by my mother, a former high school art teacher (and a gifted artist in her own right), who exposed me to a lot of different tools and materials for painting and drawing. Iâm convinced thatâs what pointed me in the direction of web development, even though weâre talking years before the internet of AOL and 56K dial-up modems. And just as thereâs art and craft to producing a creative 2D visual on paper with wet paint on a brush, thereâs a level of art and craft to designing user interfaces that are written in code. You might even say thereâs a poetry to code, just as thereâs code to writing poetry. Iâve been painting with code for 20 years. HTML, CSS, JavaScript, and friends are my medium, and Iâve created a bunch of works since then. I know my mom made a bunch of artistic works in her 25+ years teaching and studying art. In a sense, weâre both artists using a different brush to produce works in different mediums. Naturally, everything looks like code when Iâm staring at a blank canvas. Thatâs whether the canvas is paper, a screen, some Figma artboard, or what have you. Code is my horseradish and Iâve been marinating in this horseradish ocean for quite a while. This is whatâs challenging to me about performing and producing an album of music. The work is done in a different medium. The brush is no longer code (though it can be) but sounds, be them vibrations that come from a physical instrument or digital waves that come from a programmed beat or sample. There are parallels between painting with code and painting with sound, and it is mostly a matter of approach. The concepts, tasks, and challenges are the same, but the brush and canvas are totally different. Whatâs in your stack? Sound is no different than the web when it comes to choosing the right tools to do the work. Just as you need a stack of technical tools to produce a website or app, you will need technical tools to capture and produce sounds, and the decision affects how that work happens. For example, my development environment might include an editor app for writing code, a virtual server to see my work locally, GitHub for version control and collaboration, some build process that compiles and deploys my code, and a host that serves the final product to everyone on the web to see. Making music? I have recording software, microphones, gobs of guitars, and an audio interface that connects them together so that the physical sounds I make are captured and converted to digital sound waves. And, of course, I need a distributor to serve the music to be heard by others just as a host would serve code to be rendered as webpages. Can your websiteâs technical stack be as simple as writing HTML in a plain text editor and manually uploading the file to a hosting service via FTP? Of course! Your albumâs technical stack can just as easily be a boombox with a built in mic and recording. Be as indie or punk as you want! Either way, youâve gotta establish a working environment to do the work, and that environment requires you to make decisions that affect the way you work, be it code, music, or painting for that matter. Personalize your process and make it joyful. Itâs the âRecording Experienceâ (EX) to what we think of as Developer Experience (DX). 
Whatâre you painting on? If youâre painting, it could be paper. But what kind of paper? Is college-rule cool or do you need something more substantial with heavier card stock? Youâre going to want something that supports the type of paint youâre using, whether itâs oil, water, acrylic⊠or lead? That wouldnât be good. On the web, youâre most often painting on a screen that measures its space in pixel units. Screens are different than paper because theyâre not limited by physical constraints. Sure, the hardware may pose a constraint as far as how large a certain screen can be. But the scene itself is limitless where we can scroll to any portion of it that is not in the current frame. But please, avoid AJAX-based infinite scrolling patterns in your work for everyoneâs sake. Iâm also painting music on a screen thatâs as infinite as the canvas of a webpage. My recording software simply shows me a timeline and I paint sound on top of time, often layering multiple sounds at the same point in time â sound pictures, if you will. Thatâs simply one way to look at it. In some apps, itâs possible to view the canvas as movements that hold buckets of sound samples. Same thing with code. Authoring code is as likely to happen in a code editor you type into as it is to happen with a point-and-click setup in a visual interface that doesnât require touching any code at all (Dreamweaver, anyone?). Heck, the kids are even âvibeâ coding now without any awareness of how the code actually comes together. Or maybe youâre super low-fi and like to sketch your code before sitting behind a keyboard. Howâre people using it? Web developers be like all obsessed with how their work looks on whatever device someone is using. I know you know what Iâm talking about because you not only resize browsers to check responsiveness but probably also have tried opening your site (and others!) on a slew of different devices. â ïž Auto-playing media Itâs no different with sound. Iâve listened to each song Iâve recorded countless times because the way they sound varies from speaker to speaker. Thereâs one song in particular that I nearly scrapped because I struggled to get it sounding good on my AirPods Max headphones that are bass-ier than your typical speaker. I couldnât handle the striking difference between that and a different output source that might be more widely used, like car speakers. Will anyone actually listen to that song on a pair of AirPods Max headphones? Probably not. Then again, I donât know if anyone is viewing my sites on some screen built into their fridge or washing machine, but you donât see me rushing out to test that. I certainly do try to look at the sites I make on as many devices as possible to make sure nothing is completely busted. You canât control what device someone uses to look at a website. You canât control what speakers someone uses to listen to music. Thereâs a level of user experience and quality assurance that both fields share. Thereâs a whole other layer about accessibility and inclusive design that fits here as well. There is one big difference: The cringe of listening to your own voice. I never feel personally attached to the websites I make, but listening to my sounds takes a certain level of vulnerability and humility that I have to cope with. The creative process I mentioned it earlier, but I think the way music is created shares a lot of overlap with how websites are generally built. For example, a song rarely (if ever) comes fully formed. 
Most accounts I read of musicians discussing their creative process talk about the âmagicâ of a melody in which it pretty much falls in the writerâs lap. It often starts as the germ of an idea and it might take minutes, days, weeks, months, or even years to develop it into a comprehensive piece of work. I keep my phoneâs Voice Memos app at the ready so that Iâm able to quickly âsketchâ ideas that strike me in the moment. It might simply be something I hum into the phone. It could be strumming a few chords on the guitar that sound really nice together. Whatever it is, I like to think of those recordings as little low-fidelity sketches, not totally unlike sketching website layouts and content blocks with paper and pencil. Iâm partial to sketching websites on paper and pencil before jumping straight into code. Itâs go time! And, of course, thereâs what you do when itâs time to release your work. Iâm waist-deep in this part of the music and I can most definitely say that shipping an album has as many moving parts, if not more, than deploying a website. But they both require a lot of steps and dependencies that complicate the process. Itâs no exaggeration that Iâm more confused and lost about music publishing and distribution than I ever felt learning about publishing and deploying websites. Itâs perfectly understandable that someone might get lost when hosting a website. Thereâs so many ways to go about it, and the ârightâ way is shrouded in the cloak of âit dependsâ based on what youâre trying to accomplish. Well, same goes for music, apparently. Iâve signed up for a professional rights organization that establishes me as the owner of the recordings, very similar to how I need to register myself as the owner of a particular web domain. On top of that, Iâve enlisted the help of a distributor to make the songs available for anyone to hear and it is exactly the same concept as needing a host to distribute your website over the wire. I just wish I could programmatically push changes to my music catalog. Uploading and configuring the content for an album release reminds me so much of manually uploading hosted files with FTP. Nothing wrong with that, of course, but itâs certainly an opportunity to improve the developer recording experience. So, what? I guess what triggered this post is the realization that Iâve been in a self-made rut. Not a bad one, mind you, but more like being run by an automated script programmed to run efficiently in one direction. Working on a music project forced me into a new context where my development environment and paint brush of code are way less effective than what I need to get the job done. Itâs sort of like breaking out of the grid. My layout has been pretty fixed for some time and Iâm drawing new grid tracks that open my imagination up to a whole new way of work thatâs been right in front of me the entire time, but drowned in my horseradish ocean. Thereâs so much we can learn from other disciplines, be it music, painting, engineering, architecture, working on cars⊠turns out front-end development is like a lot of other things. So, whatâs your horseradish and what helps you look past it? Same Idea, Different Paint Brush originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Touring New CSS Features in Safari 26
- Articles
- browser
- news
- safari
Safari 26 adds:75 new features, 3 deprecations, and 171 other improvements. Here's all the CSS goodness you'll want to know about.
Touring New CSS Features in Safari 26 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
A couple of days ago, the Apple team released Safari 26.0! Is it a big deal? I mean, browsers release new versions all the time, where they sprinkle in a couple or few new features. They are, of course, all useful, but there arenât usually a lot of âbig leapsâ between versions. Safari 26 is different, though. It introduces a lot of new stuff. To be precise, it adds: 75 new features, 3 deprecations, and 171 other improvements. Iâd officially call that a big deal. The WebKit blog post does an amazing job breaking down each of the new (not only CSS) features. But again, there are so many that the new stuff coming to CSS deserves its own spotlight. So, today I want to check (and also try) what I think are the most interesting features coming to Safari. If you are like me and donât have macOS to test Safari, you can use Playwright instead. Whatâs new (to Safari)? Safari 26 introduces several features you may already know from prior Chrome releases. And⊠I canât blame Safari for seemingly lagging behind because Chrome is shipping new CSS at a scarily fast pace. I appreciate that browsers stagger releases so they can refine things against each other. Remember when Chrome initially shipped position-area as inset-area? We got better naming between the two implementations. I think what youâll find (as I did) that many of these overlapping features are part of the bigger effort towards Interop 2025, something WebKit is committed to. So, letâs look specifically at whatâs new in Safari 26⊠at least thatâs new to Safari. Anchor positioning Anchor positioning is one of my favorite features (I wrote the guide on it!), so I am so glad itâs arrived in Safari. We are now one step closer to widely available support which means weâre that much closer to using anchor positioning in our production work. With CSS Anchor Positioning, we can attach an absolutely-positioned element (that we may call a âtargetâ) to another element (that we may call an âanchorâ). This makes creating things like tooltips, modals, and pop-ups trivial in CSS, although it can be used for a variety of layouts. Using anchor positioning, we can attach any two elements, like these, together. It doesnât even matter where they are in the markup. <div class="anchor">anchor</div> <div class="target">target</div> Heads up: Even though the source order does not matter for positioning, it does for accessibility, so itâs a good idea to establish a relationship between the anchor and target using ARIA attributes for better experiences that rely on assistive tech. We register the .anchor element using the anchor-name property, which takes a dashed ident. We then use that ident to attach the .target to the .anchor using the position-anchor property. .anchor { anchor-name: --my-anchor; /* the ident */ } .target { position: absolute; position-anchor: --my-anchor; /* attached! */ } This positions the .target at the center of the .anchor â again, no matter the source order! If we want to position it somewhere else, the simplest way is using the position-area property. With position-area, we can define a region around the .anchor and place the .target in it. Think of it like drawing a grid of squares that are mapped to the .anchorâs center, top, right, bottom and left. For example, if we wish to place the target at the anchorâs top-right corner, we can write⊠.target { /* ... */ position-area: top right; } This is just a taste since anchor positioning is a world unto itself. Iâd encourage you to read our full guide on it. 
Scroll-driven animations Scroll-driven animations link CSS animations (created from @keyframes) to an elementâs scroll position. So instead of running an animation for a given time, the animation will depend on where the user scrolls. We can link an animation to two types of scroll-driven events: Linking the animation to a scrollable container using the scroll() function. Linking the animation to an elementâs position on the viewport using the view() function. Both of these functions are used inside the animation-timeline, which links the animation progress to the type of timeline weâre using, be it scroll or view. Whatâs the difference? With scroll(), the animation runs as the user scrolls the page. The simplest example is one of those reading bars that you might see grow as you read down the page. First, we define our everyday animation and add it to the bar element: @keyframes grow { from { transform: scaleX(0); } to { transform: scaleX(1); } } .progress { transform-origin: left center; animation: grow linear; } Note: I am setting transform-origin to left so it the animation progresses from the left instead of expanding from the center. Then, instead of giving the animation a duration, we can plug it into the scroll position like this: .progress { /* ... */ animation-timeline: scroll(); } Assuming youâre using Safari 26 or the latest version of Chrome, the bar grows in width from left to right as you scroll down the viewport. The view() function is similar, but it bases the animation on the elementâs position when it is in view of the viewport. That way, an animation can start or stop at specific points on the page. Hereâs an example making images âpopâ up as they enter view. @keyframes popup { from { opacity: 0; transform: translateY(100px); } to { opacity: 1; transform: translateY(0px); } } img { animation: popup linear; } Then, to make the animation progress as the element enters the viewport, we plug the animation-timeline to view(). img { animation: popup linear; animation-timeline: view(); } If we leave like this, though, the animation ends just as the element leaves the screen. The user doesnât see the whole thing! What we want is for the animation to end when the user is in the middle of the viewport so the full timeline runs in view. This is where we can reach for the animation-range property. It lets us set the animationâs start and end points relative to the viewport. In this specific example, letâs say I want the animation to start when the element enters the screen (i.e., the 0% mark) and finishes a little bit before it reaches the direct center of the viewport (weâll say 40%): img { animation: popup linear; animation-timeline: view(); animation-range: 0% 40%; } Once again, scroll-driven animations go way beyond these two basic examples. For a quick intro to all there is to them, I recommend Geoffâs notes. I feel safer using scroll-drive animations in my production work because itâs more of a progressive enhancement that wonât break an experience even if it is not supported by the browser. Even so, someone may prefer reduced (or no) animation at all, meaning weâd better progressively enhance it anyway with prefers-reduced-motion. The progress() function This is another feature we got in Chrome that has made its way to Safari 26. Funny enough, I missed it in Chrome when it released a few months ago, so it makes me twice as happy to see such a handy feature baked into two major browsers. 
The progress() function tells you how much a value has progressed in a range between a starting point and an ending point: progress(<value>, <start>, <end>) If the <value> is less than the <start>, the result is 0. If the <value> reaches the <end>, the result is 1. Anything in between returns a decimal between 0 and 1. Technically, this is something we can already do in a calc()-ulation: calc((value - start) / (end - start)) But thereâs a key difference! With progress(), we can calculate values from mixed data types (like adding px to rem), which isnât currently possible with calc(). For example, we can get the progress value formatted in viewport units from a numeric range formatted in pixels: progress(100vw, 400px, 1000px); âŠand it will return 0 when the viewport is 400px, and as the screen grows to 1000px, it progresses to 1. This means it can typecast different units into a number, and as a consequence, we can transition properties like opacity (which takes a number or percentage) based on the viewport (which is a distance length). Thereâs another workaround that accomplishes this using tan() and atan2() functions. I have used that approach before to create smooth viewport transitions. But progress() greatly simplifies the work, making it much more maintainable. Case in point: We can orchestrate multiple animations as the screen size changes. This next demo takes one of the demos I made for the article about tan() and atan2(), but swaps that out with progress(). Works like a charm! Thatâs a pretty wild example. Something more practical might be reducing an imageâs opacity as the screen shrinks: img { opacity: clamp(0.25, progress(100vw, 400px, 1000px), 1); } Go ahead and resize the demo to update the imageâs opacity, assuming youâre looking at it in Safari 26 or the latest version of Chrome. Iâve clamp()-ed the progress() between 0.25 and 1. But, by default, progress() already clamps the <value> between 0 and 1. According to the WebKit release notes, the current implementation isnât clamped by default, but upon testing, it does seem to be. So, if youâre wondering why Iâm clamping something thatâs supposedly clamped already, thatâs why. An unclamped version may come in the future, though. Self-alignment in absolute positioning And, hey, check this out! We can align-self and justify-self content inside absolutely-positioned elements. This isnât as big a deal as the other features weâve looked at, but it does have a handy use case. For example, I sometimes want to place an absolutely-positioned element directly in the center of the viewport, but inset-related properties (i.e., top, right, bottom, left, etc.) are relative to the elementâs top-left corner. That means we donât get perfectly centered with something like this as weâd expect: .absolutely-positioned { position: absolute; top: 50%; left: 50%; } From here, we could translate the element by half to get things perfectly centered. But now we have the center keyword supported by align-self and justify-self, meaning fewer moving pieces in the code: .absolutely-positioned { position: absolute; justify-self: center; } Weirdly enough, I noticed that align-self: center doesnât seem to center the element relative to the viewport, but instead relative to itself. 
I found out that can use the anchor-center value to center the element relative to its default anchor, which is the viewport in this specific example: .absolutely-positioned { position: absolute; align-self: anchor-center; justify-self: center; } And, of course, place-self is a shorthand for the align-self and justify-self properties, so we could combine those for brevity: .absolutely-positioned { position: absolute; place-self: anchor-center center; } Whatâs new (for the web)? Safari 26 isnât just about keeping up with Chrome. Thereâs a lot of exciting new stuff in here that weâre getting our hands on for the first time, or that is refined from other browser implementations. Letâs look at those features. The constrast-color() function The constrast-color() isnât new by any means. Itâs actually been in Safari Technology Preview since 2021 where it was originally called color-contrast(). In Safari 26, we get the updated naming as well as some polish. Given a certain color value, contrast-color() returns either white or black, whichever produces a sharper contrast with that color. So, if we were to provide coral as the color value for a background, we can let the browser decide whether the text color is more contrasted with the background as either white or black: h1 { --bg-color: coral; background-color: var(--bg-color); color: contrast-color(var(--bg-color)); } Our own Daniel Schwarz recently explored the contrast-color() function and found itâs actually not that great at determining the best contrast between colors: Undoubtedly, the number one shortcoming is that contrast-color() only resolves to either black or white. If you donât want black or white, well⊠that sucks. It sucks because there are cases where neither white nor black produces enough contrast with the provided color to meet WCAG color contrast guidelines. There is an intent to extend contrast-color() so it can return additional color values, but there still would be concerns about how exactly contrast-color() arrives at the âbestâ color, since we would still need to take into consideration the fontâs width, size, and even family. Always check the actual contrast! So, while itâs great to finally have constrat-color(), I do hope we see improvements added in the future. Pretty text wrapping Safari 26 also introduces text-wrap: pretty, which is pretty (get it?) straightforward: it makes paragraphs wrap in a prettier way. You may remember that Chrome shipped this back in 2023. But take notice that there is a pretty (OK, thatâs the last time) big difference between the implementations. Chrome only avoids typographic orphans (short last lines). Safari does more to prettify the way text wraps: Prevents short lines. Avoids single words at the end of the paragraph. Improves rag. Keeps each line relatively the same length. Reduces hyphenation. When enabled, hyphenation improves rag but also breaks words apart. In general, hyphenation should be kept to a minimum. The WebKit blog gets into much greater detail if youâre curious about what considerations they put into it. Safari takes additional actions to ensure âprettyâ text wrapping, including the overall ragging along the text. This is just the beginning! I think these are all the CSS features coming to Safari that you should look out for, but I donât want you to think they are the only features in the release. 
As I mentioned at the top, we're talking about 75 new Web Platform features, including HDR Images, support for SVG favicons, logical property support for overflow properties, margin trimming, and much, much more. It's worth perusing the full release notes. Touring New CSS Features in Safari 26 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Recreating Gmail's Google Gemini Animation
- Articles
- animation
John Rhea challenged himself to recreate the fancy button using the new CSS shape() function sprinkled with animation to get things pretty close.
Recreating Gmail's Google Gemini Animation originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
I always see this Google Gemini button up in the corner in Gmail. When you hover over it, it does this cool animation where the little four-pointed star spins and the outer shape morphs between a couple different shapes that are also spinning. I challenged myself to recreate the button using the new CSS shape() function sprinkled with animation to get things pretty close. Let me walk you through it. Drawing the Shapes Breaking it down, we need five shapes in total: Four-pointed star Flower-ish thing (yes, thatâs the technical term) Cylinder-ish thing (also the correct technical term) Rounded hexagon Circle I drew these shapes in a graphics editing program (I like Affinity Designer, but any app that lets you draw vector shapes should work), outputted them in SVG, and then used a tool, like Temani Afifâs generator, to translate the SVG paths the program generated to the CSS shape() syntax. Now, before I exported the shapes from Affinity Designer, I made sure the flower, hexagon, circle, and cylinder all had the same number of anchor points. If they donât have the same number, then the shapes will jump from one to the next and wonât do any morphing. So, letâs use a consistent number of anchor points in each shape â even the circle â and we can watch these shapes morph into each other. I set twelve anchor points on each shape because that was the highest amount used (the hexagon had two points near each curved corner). Something related (and possibly hard to solve, depending on your graphics program) is that some of my shapes were wildly contorted when animating between shapes. For example, many shapes became smaller and began spinning before morphing into the next shape, while others were much more seamless. I eventually figured out that the interpolation was matching each shapeâs starting point and continued matching points as it followed the shape. The result is that the matched points move between shapes, so if the starting point for one shape is on opposite side of the starting point of the second shape, a lot of movement is necessary to transition from one shapeâs starting point to the next shapeâs starting point. Luckily, the circle was the only shape that gave me trouble, so I was able to spin it (with some trial and error) until its starting point more closely matched the other starting points. Another issue I ran into was that the cylinder-ish shape had two individual straight lines in shape() with line commands rather than using the curve command. This prevented the animation from morphing into the next shape. It immediately snapped to the next image without animating the transition, skipping ahead to the next shape (both when going into the cylinder and coming out of it). I went back into Affinity Designer and ever-so-slightly added curvature to the two lines, and then it morphed perfectly. I initially thought this was a shape() quirk, but the same thing happened when I attempted the animation with the path() function, suggesting itâs more an interpolation limitation than it is a shape() limitation. Once I finished adding my shape() values, I defined a CSS variable for each shape. This makes the later uses of each shape() more readable, not to mention easier to maintain. With twelve lines per shape the code is stinkinâ long (technical term) so weâve put it behind an accordion menu. 
View Shape Code :root { --hexagon: shape( evenodd from 6.47% 67.001%, curve by 0% -34.002% with -1.1735% -7.7% / -1.1735% -26.302%, curve by 7.0415% -12.1965% with 0.7075% -4.641% / 3.3765% -9.2635%, curve by 29.447% -17.001% with 6.0815% -4.8665% / 22.192% -14.1675%, curve by 14.083% 0% with 4.3725% -1.708% / 9.7105% -1.708%, curve by 29.447% 17.001% with 7.255% 2.8335% / 23.3655% 12.1345%, curve by 7.0415% 12.1965% with 3.665% 2.933% / 6.334% 7.5555%, curve by 0% 34.002% with 1.1735% 7.7% / 1.1735% 26.302%, curve by -7.0415% 12.1965% with -0.7075% 4.641% / -3.3765% 9.2635%, curve by -29.447% 17.001% with -6.0815% 4.8665% / -22.192% 14.1675%, curve by -14.083% 0% with -4.3725% 1.708% / -9.7105% 1.708%, curve by -29.447% -17.001% with -7.255% -2.8335% / -23.3655% -12.1345%, curve by -7.0415% -12.1965% with -3.665% -2.933% / -6.334% -7.5555%, close ); --flower: shape( evenodd from 17.9665% 82.0335%, curve by -12.349% -32.0335% with -13.239% -5.129% / -18.021% -15.402%, curve by -0.0275% -22.203% with -3.1825% -9.331% / -3.074% -16.6605%, curve by 12.3765% -9.8305% with 2.3835% -4.3365% / 6.565% -7.579%, curve by 32.0335% -12.349% with 5.129% -13.239% / 15.402% -18.021%, curve by 20.4535% -0.8665% with 8.3805% -2.858% / 15.1465% -3.062%, curve by 11.58% 13.2155% with 5.225% 2.161% / 9.0355% 6.6475%, curve by 12.349% 32.0335% with 13.239% 5.129% / 18.021% 15.402%, curve by 0.5715% 21.1275% with 2.9805% 8.7395% / 3.0745% 15.723%, curve by -12.9205% 10.906% with -2.26% 4.88% / -6.638% 8.472%, curve by -32.0335% 12.349% with -5.129% 13.239% / -15.402% 18.021%, curve by -21.1215% 0.5745% with -8.736% 2.9795% / -15.718% 3.0745%, curve by -10.912% -12.9235% with -4.883% -2.2595% / -8.477% -6.6385%, close ); --cylinder: shape( evenodd from 10.5845% 59.7305%, curve by 0% -19.461% with -0.113% -1.7525% / -0.11% -18.14%, curve by 10.098% -26.213% with 0.837% -10.0375% / 3.821% -19.2625%, curve by 29.3175% -13.0215% with 7.2175% -7.992% / 17.682% -13.0215%, curve by 19.5845% 5.185% with 7.1265% 0% / 13.8135% 1.887%, curve by 9.8595% 7.9775% with 3.7065% 2.1185% / 7.035% 4.8195%, curve by 9.9715% 26.072% with 6.2015% 6.933% / 9.4345% 16.082%, curve by 0% 19.461% with 0.074% 1.384% / 0.0745% 17.7715%, curve by -13.0065% 29.1155% with -0.511% 11.5345% / -5.021% 21.933%, curve by -26.409% 10.119% with -6.991% 6.288% / -16.254% 10.119%, curve by -20.945% -5.9995% with -7.6935% 0% / -14.8755% -2.199%, curve by -8.713% -7.404% with -3.255% -2.0385% / -6.1905% -4.537%, curve by -9.7575% -25.831% with -6.074% -6.9035% / -9.1205% -15.963%, close ); --star: shape( evenodd from 50% 24.787%, curve by 7.143% 18.016% with 0% 0% / 2.9725% 13.814%, curve by 17.882% 7.197% with 4.171% 4.2025% / 17.882% 7.197%, curve by -17.882% 8.6765% with 0% 0% / -13.711% 4.474%, curve by -7.143% 16.5365% with -4.1705% 4.202% / -7.143% 16.5365%, curve by -8.6115% -16.5365% with 0% 0% / -4.441% -12.3345%, curve by -16.4135% -8.6765% with -4.171% -4.2025% / -16.4135% -8.6765%, curve by 16.4135% -7.197% with 0% 0% / 12.2425% -2.9945%, curve by 8.6115% -18.016% with 4.1705% -4.202% / 8.6115% -18.016%, close ); --circle: shape( evenodd from 13.482% 79.505%, curve by -7.1945% -12.47% with -1.4985% -1.8575% / -6.328% -10.225%, curve by 0.0985% -33.8965% with -4.1645% -10.7945% / -4.1685% -23.0235%, curve by 6.9955% -12.101% with 1.72% -4.3825% / 4.0845% -8.458%, curve by 30.125% -17.119% with 7.339% -9.1825% / 18.4775% -15.5135%, curve by 13.4165% 0.095% with 4.432% -0.6105% / 8.9505% -0.5855%, curve by 29.364% 16.9% with 11.6215% 1.77% / 
22.102% 7.9015%, curve by 7.176% 12.4145% with 3.002% 3.7195% / 5.453% 7.968%, curve by -0.0475% 33.8925% with 4.168% 10.756% / 4.2305% 22.942%, curve by -7.1135% 12.2825% with -1.74% 4.4535% / -4.1455% 8.592%, curve by -29.404% 16.9075% with -7.202% 8.954% / -18.019% 15.137%, curve by -14.19% -0.018% with -4.6635% 0.7255% / -9.4575% 0.7205%, curve by -29.226% -16.8875% with -11.573% -1.8065% / -21.9955% -7.9235%, close ); } If all that looks like gobbledygook to you, it largely does to me too (and I wrote the shape() Almanac entry). As I said above, I converted them from stuff I drew to shape()s with a tool. If you can recognize the shapes from the custom property names, then youâll have all you need to know to keep following along. Breaking Down the Animation After staring at the Gmail animation for longer than I would like to admit, I was able to recognize six distinct phases: First, on hover: The four-pointed star spins to the right and changes color. The fancy blue shape spreads out from underneath the star shape. The fancy blue shape morphs into another shape while spinning. The purplish color is wiped across the fancy blue shape. Then, after hover: The fancy blue shape contracts (basically the reverse of Phase 2). The four-pointed star spins left and returns to its initial color (basically the reverse of Phase 1). Thatâs the run sheet weâre working with! Weâll write the CSS for all that in a bit, but first Iâd like to set up the HTML structure that weâre hooking into. The HTML Iâve always wanted to be one of those front-enders who make jaw-dropping art out of CSS, like illustrating the Sistine chapel ceiling with a single div (cue someone commenting with a CodePen doing just that). But, alas, I decided I needed two divs to accomplish this challenge, and I thank you for looking past my shame. To those of you who turned up your nose and stopped reading after that admission: I can safely call you a Flooplegerp and youâll never know it. (To those of you still with me, I donât actually know what a Flooplegerp is. But Iâm sure itâs bad.) Because the animation needs to spread out the blue shape from underneath the star shape, they need to be two separate shapes. And we canât shrink or clip the main element to do this because that would obscure the star. So, yeah, thatâs why Iâm reaching for a second div: to handle the fancy shape and how it needs to move and interact with the star shape. <div id="geminianimation"> <div></div> </div> The Basic CSS Styling Each shape is essentially defined with the same box with the same dimensions and margin spacing. #geminianimation { width: 200px; aspect-ratio: 1/1; margin: 50px auto; position: relative; } We can clip the box to a particular shape using a pseudo-element. For example, letâs clip a star shape using the CSS variable (--star) we defined for it and set a background color on it: #geminianimation { width: 200px; aspect-ratio: 1; margin: 50px auto; position: relative; &::before { content: ""; clip-path: var(--star); width: 100%; height: 100%; position: absolute; background-color: #494949; } } We can hook into the containerâs child div and use it to establish the animationâs starting shape, which is the flower (clipped with our --flower variable): #geminianimation div { width: 100%; height: 100%; clip-path: var(--flower); background: linear-gradient(135deg, #217bfe, #078efb, #ac87eb, #217bfe); } What we get is a star shape stacked right on top of a flower shape. 
We're almost done with our initial CSS, but in order to recreate the animated color wipes, we need a much larger surface that allows us to "change" colors by moving the background gradient's position. Let's move the gradient so that it is declared on a pseudo-element instead of the child div, and size it up by 400% to give us additional breathing room. #geminianimation div { width: 100%; height: 100%; clip-path: var(--flower); &::after { content: ""; background: linear-gradient(135deg, #217bfe, #078efb, #ac87eb, #217bfe); width: 400%; height: 400%; position: absolute; } } Now we can clearly see how the shapes are positioned relative to each other: Animating Phases 1 and 6 Now, I'll admit, in my own hubris, I've turned up my very own schnoz at the humble transition property because my thinking is typically, Transitions are great for getting started in animation and for quick things, but real animations are done with CSS keyframes. (Perhaps I, too, am a Flooplegerp.) But now I see the error of my ways. I can write a set of keyframes that rotate the star 180 degrees, turn its color white(ish), and have it stay that way for as long as the element is hovered. What I can't do is animate the star back to what it was when the element is un-hovered. I can, however, do that with the transition property. To do this, we add transition: 1s ease-in-out; on the ::before, adding the new background color and rotating things on :hover over the #geminianimation container. This accounts for the first and sixth phases of the animation we outlined earlier. #geminianimation { &::before { /* Existing styles */ transition: 1s ease-in-out; } &:hover { &::before { transform: rotate(180deg); background-color: #FAFBFE; } } } Animating Phases 2 and 5 We can do something similar for the second and fifth phases of the animation since they are mirror reflections of each other. Remember, in these phases, we're spreading and contracting the fancy blue shape. We can start by shrinking the inner div's scale to zero initially, then expand it back to its original size (scale: 1) on :hover (again using transitions): #geminianimation { div { scale: 0; transition: 1s ease-in-out; } &:hover { div { scale: 1; } } }
Weâll set the initial rotation to 0turn and the final rotation 1turn so that it can keep spinning smoothly as long as the animation is continuing: @keyframes shapeshift { 0% { clip-path: var(--circle); rotate: 0turn; } 25% { clip-path: var(--flower); } 50% { clip-path: var(--cylinder); } 75% { clip-path: var(--hexagon); } 100% { clip-path: var(--circle); rotate: 1turn; } } Note: Yes, turn is indeed a CSS unit, albeit one that often goes overlooked. We want the animation to be smooth as it interpolates between shapes. So, Iâm setting the animationâs timing function with ease-in-out. Unfortunately, this will also slow down the rotation as it starts and ends. However, because weâre both beginning and ending with the circle shape, the fact that the rotation slows coming out of 0% and slows again as it heads into 100% is not noticeable â a circle looks like a circle no matter its rotation. If we were ending with a different shape, the easing would be visible and I would use two separate sets of keyframes â one for the shape-shift and one for the rotation â and call them both on the #geminianimation child div . #geminianimation:hover { div { animation: shapeshift 5s ease-in-out infinite forwards; } } Animating Phase 4 That said, we still do need one more set of keyframes, specifically for changing the shapeâs color. Remember how we set a linear gradient on the parent containerâs ::after pseudo, then we also increased the pseudoâs width and height? Hereâs that bit of code again: #geminianimation div { width: 100%; height: 100%; clip-path: var(--flower); &::after { content: ""; background: linear-gradient(135deg, #217bfe, #078efb, #ac87eb, #217bfe); width: 400%; height: 400%; position: absolute; } } The gradient is that large because weâre only showing part of it at a time. And that means we can translate the gradientâs position to move the gradient at certain keyframes. 400% can be nicely divided into quarters, so we can move the gradient by, say, three-quarters of its size. Since its parent, the #geminianimation div, is already spinning, we donât need any fancy movements to make it feel like the color is coming from different directions. We just translate it linearly and the spin adds some variability to what direction the color wipe comes from. @keyframes gradientMove { 0% { translate: 0 0; } 100% { translate: -75% -75%; } } One final refinement Instead of using the flower as the default shape, letâs change it to circle. This smooths things out when the hover interaction causes the animation to stop and return to its initial position. And there you have it: Wrapping up We did it! Is this exactly how Google accomplished the same thing? Probably not. In all honesty, I never inspected the animation code because I wanted to approach it from a clean slate and figure out how I would do it purely in CSS. Thatâs the fun thing about a challenge like this: there are different ways to accomplish the same thing (or something similar), and your way of doing it is likely to be different than mine. Itâs fun to see a variety of approaches. Which leads me to ask: How would you have approached the Gemini button animation? What considerations would you take into account that maybe I havenât? Recreating Gmailâs Google Gemini Animation originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
CSS Typed Arithmetic
- Articles
- animation
- CSS functions
- math
Starting in Chrome 140, we'll be able to calculate numeric values with mixed data types. Sounds small, but Amit demonstrates how big a deal this is, calling it Computational CSS.
CSS Typed Arithmetic originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
CSS typed arithmetic is genuinely exciting! It opens the door to new kinds of layout composition and animation logic we could only hack before. The first time I published something that leaned on typed arithmetic was in this animation: But before we dive into what is happening in there, letâs pause and get clear on what typed arithmetic actually is and why it matters for CSS. Browser Support: The CSS feature discussed in this article, typed arithmetic, is on the cutting edge. As of the time of writing, browser support is very limited and experimental. To ensure all readers can understand the concepts, the examples throughout this article are accompanied by videos and images, demonstrating the results for those whose browsers do not yet support this functionality. Please check resources like MDN or Can I Use for the latest support status. The Types If you really want to get what a âtypeâ is in CSS, think about TypeScript. Now forget about TypeScript. This is a CSS article, where semantics actually matter. In CSS, a type describes the unit space a value lives in, and is called a data-type. Every CSS value belongs to a specific type, and each CSS property and function only accepts the data type (or types) it expects. Properties like opacity or scale use a plain <number> with no units. width, height, other box metrics, and many additional properties use <length> units like px, rem, cm, etc. Functions like rotate() or conic-gradient() use an <angle> with deg, rad, or turn. animation and transition use <time> for their duration in seconds (s) or milliseconds (ms). Note: You can identify CSS data types in the specs, on MDN, and other official references by their angle brackets: <data-type>. There are many more data types like <percentage>, <frequency>, and <resolution>, but the types mentioned above cover most of our daily use cases and are all we will need for our discussion today. The mathematical concept remains the same for (almost) all types. I say âalmostâ all types for one reason: not every data type is calculable. For instance, types like <color>, <string>, or <image> cannot be used in mathematical operations. An expression like "foo" * red would be meaningless. So, when we discuss mathematics in general, and typed arithmetic in particular, it is crucial to use types that are inherently calculable, like <length>, <angle>, or <number>. The Rules of Typed Arithmetic Even when we use calculable data types, there are still limitations and important rules to keep in mind when performing mathematical operations on them. Addition and Subtraction Sadly, a mix-and-match approach doesnât really work here. Expressions like calc(3em + 45deg) or calc(6s - 3px) will not produce a logical result. When adding or subtracting, you must stick to the same data type. Of course, you can add and subtract different units within the same type, like calc(4em + 20px) or calc(300deg - 1rad). Multiplication With multiplication, you can only multiply by a plain <number> type. For example: calc(3px * 7), calc(10deg * 6), or calc(40ms * 4). The result will always adopt the type and unit of the first value, with the new value being the product of the multiplication. But why can you only multiply by a number? If we tried something like calc(10px * 10px) and assumed it followed âregularâ math, we would expect a result of 100pxÂČ. However, there are no squared pixels in CSS, and certainly no square degrees (though that could be interestingâŠ). 
Because such a result is invalid, CSS only permits multiplying typed values by unitless numbers. Division Here, too, mixing and matching incompatible types is not allowed, and you can divide by a number just as you can multiply a number. But what happens when you divide a type by the same type? Hint: this is where things get interesting. Again, if we were thinking in terms of regular math, we would expect the units to cancel each other out, leaving only the calculated value. For example, 90x / 6x = 15. In CSS, however, this isnât the case. Sorry, it wasnât the case. Previously, an expression like calc(70px / 10px) would have been invalid. But starting with Safari 18.2 and Chrome 140 (and hopefully soon in all other browsers), this expression now returns a valid number, which winds up being 7 in this case. This is the major change that typed arithmetic enables. Is that all?! That little division? Is that the big thing I called âgenuinely excitingâ? Yes! Because this one little feature opens the door to a world of creative possibilities. Case in point: we can convert values from one data type to another and mathematically condition values of one type based on another, just like in the swirl example I demoed at the top. So, to understand what is happening there, letâs look at a more simplified swirl: I have a container<div> with 36 <i> elements in the markup that are arranged in a spiral with CSS. Each element has an angle relative to the center point, rotate(var(--angle)), and a distance from that center point, translateX(var(--distance)). The angle calculation is quite direct. I take the index of each <i> element using sibling-index() and multiply it by 10deg. So, the first element with an index of 1 will be rotated by 10 degrees (1 * 10deg), the second by 20 degrees (2 * 10deg), the third by 30 degrees (3 * 10deg), and so on. i { --angle: calc(sibling-index() * 10deg); } As for the distance, I want it to be directly proportional to the angle. I first use typed arithmetic to divide the angle by 360 degrees: var(--angle) / 360deg. This returns the angleâs value, but as a unitless number, which I can then use anywhere. In this case, I can multiply it by a <length> value (e.g. 180px) that determines the elementâs distance from the center point. i { --angle: calc(sibling-index() * 10deg); --distance: calc(var(--angle) / 360deg * 180px); } This way, the ratio between the angle and the distance remains constant. Even if we set the angle of each element differently, or to a new value, the elements will still align on the same spiral. The Importance of the Divisorâs Unit Itâs important to clarify that when using typed arithmetic this way, you get a unitless number, but its value is relative to the unit of the divisor. In our simplified spiral, we divided the angle by 360deg. The resulting unitless number, therefore, represents the value in degrees. If we had divided by 1turn instead, the result would be completely different â even though 1turn is equivalent to 360deg, the resulting unitless number would represent the value in turns. A clearer example can be seen with <length> values. Letâs say we are working with a screen width of 1080px. If we divide the screen width (100vw) by 1px, we get the number of pixels that fit into the screen width, which is, of course, 1080. calc(100vw / 1px) /* 1080 */ However, if we divide that same width by 1em (and assume a font size of 16px), we get the number of em units that fit across the screen. 
calc(100vw / 1em) /* 67.5 */ The resulting number is unitless in both cases, but its meaning is entirely dependent on the unit of the value we divided by. From Length to Angle Of course, this conversion doesn't have to be from a type <angle> to a type <length>. Here is an example that calculates an element's angle based on the screen width (100vw), creating a new and unusual kind of responsiveness. And get this: There are no media queries in here! It's all happening in a single line of CSS doing the calculations. To determine the angle, I first define the width range I want to work within. clamp(300px, 100vw, 700px) gives me a closed range of 400px, from 300px to 700px. I then subtract 700px from this range, which gives me a new range, from -400px to 0px. Using typed arithmetic, I then divide this range by 400px, which gives me a normalized, unitless number between -1 and 0. And finally, I convert this number into an <angle> by multiplying it by -90deg. Here's what that looks like in CSS when we put it all together: p { rotate: calc(((clamp(300px, 100vw, 700px) - 700px) / 400px) * -90deg); } From Length to Opacity Of course, the resulting unitless number can be used as-is in any property that accepts a <number> data type, such as opacity. What if I want to determine the font's opacity based on its size, making smaller fonts more opaque and therefore clearer? Is it possible? Absolutely. In this example, I am setting a different font-size value for each <p> element using a --font-size custom property, and since the range of this variable is from 0.8rem to 2rem, I first subtract 0.8rem from it to create a new range of 0 to 1.2rem. I could divide this range by 1.2rem to get a normalized, unitless value between 0 and 1. However, because I don't want the text to become fully transparent, I divide it by twice that amount (2.4rem). This gives me a result between 0 and 0.5, which I then subtract from the maximum opacity of 1. p { font-size: var(--font-size, 1rem); opacity: calc(1 - (var(--font-size, 1rem) - 0.8rem) / 2.4rem); } Notice that I am displaying the font size in pixel units even though the size is defined in rem units. I simply use typed arithmetic to divide the font size by 1px, which gives me the size in pixels as a unitless value. I then inject this value into the content of the paragraph's ::after pseudo-element. p::after { counter-reset: px calc(var(--font-size, 1rem) / 1px); content: counter(px) 'px'; } Dynamic Width Colors Of course, the real beauty of using native CSS math functions, compared to other approaches, is that everything happens dynamically at runtime. Here, for example, is a small demo where I color the element's background relative to its rendered width. p { --hue: calc(100cqi / 1px); background-color: hsl(var(--hue, 0) 75% 25%); } You can drag the bottom-right corner of the element to see how the color changes in real-time. Here's something neat about this demo: because the element's default width is 50% of the screen width and the color is directly proportional to that width, it's possible that the element will initially appear in completely different colors on different devices with different screens. Again, this is all happening without any media queries or JavaScript. An Extreme Example: Chaining Conversions OK, so we've established that typed arithmetic is cool and opens up new and exciting possibilities. Before we put a bow on this, I wanted to pit this concept against a more extreme example.
I tried to imagine what would happen if we took a <length> type, converted it to a <number> type, then to an <angle> type, back to a <number> type, and, from there, back to a <length> type. Phew! I couldnât find a real-world use case for such a chain, but I did wonder what would happen if we were to animate an elementâs width and use that width to determine the height of something else. All the calculations might not be necessary (maybe?), but I think I found something that looks pretty cool. In this demo, the animation is on the solid line along the bottom. The vertical position of the ball, i.e. its height, relative to the line, is proportional to the lineâs width. So, as the line expands and contracts, so does the path of the bouncing ball. To create the parabolic arc that the ball moves along, I take the elementâs width (100cqi) and, using typed arithmetic, divide it by 300px to get a unitless number between 0 and 1. I multiply that by 180deg to get an angle that I use in a sin() function (Juan Diego has a great article on this), which returns another unitless number between 0 and 1, but with a parabolic distribution of values. Finally, I multiply this number by -200px, which outputs the ballâs vertical position relative to the line. .ball { --translateY: calc(sin(calc(100cqi / 300px) * 180deg) * -200px) ; translate: -50% var(--translateY, 0); } And again, because the ballâs position is relative to the lineâs width, the ballâs position will remain on the same arc, no matter how we define that width. Wrapping Up: The Dawn of Computational CSS The ability to divide one typed value by another to produce a unitless number might seem like no big deal; more like a minor footnote in the grand history of CSS. But as weâve seen, this single feature is a quiet revolution. It dismantles the long-standing walls between different CSS data types, transforming them from isolated silos into a connected, interoperable system. Weâve moved beyond simple calculations, and entered the era of true Computational CSS. This isnât just about finding new ways to style a button or animate a loading spinner. It represents a fundamental shift in our mental model. We are no longer merely declaring static styles, but rather defining dynamic, mathematical relationships between properties. The width of an element can now intrinsically know about its color, an angle can dictate a distance, and a fontâs size can determine its own visibility. This is CSS becoming self-aware, capable of creating complex behaviors and responsive designs that adapt with a precision and elegance that previously required JavaScript. So, the next time you find yourself reaching for JavaScript to bridge a gap between two CSS properties, pause for a moment. Ask yourself if thereâs a mathematical relationship you can define instead. You might be surprised at how far you can go with just a few lines of CSS. The Future is Calculable The examples in this article are just the first steps into a much larger world. What happens when we start mixing these techniques with scroll-driven animations, view transitions, and other modern CSS features? The potential for creating intricate data visualizations, generative art, and truly fluid user interfaces, all natively in CSS, is immense. We are being handed a new set of creative tools, and the instruction manual is still being written. CSS Typed Arithmetic originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
On inclusive personas and inclusive user research
- Notes
- accessibility
- UI/UX Design
A set of notes taken from Eric Bailey's article about the use of inclusive personas and user research.
On inclusive personas and inclusive user research originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
I'm inclined to take a few notes on Eric Bailey's grand post about the use of inclusive personas in user research. As someone who has been in roles that have both used and created user personas, there's so much in here. What's the big deal, right? We're often taught and encouraged to think about users early in the design process. It's user-centric design, so let's personify 3-4 of the people we think represent our target audiences so our work is aligned with their objectives and needs. My master's program was big on that and went deep into different approaches, strategies, and templates for documenting that research. And, yes, it is research. The idea, in theory, is that by understanding the motivations and needs of specific users (gosh, isn't "users" an awkward term?), we can "design backwards" so that the end goal is aligned to actions that get them there. Eric sees holes in that process, particularly when it comes to research centered around inclusiveness. Why is that? Very good reasons that I'm compiling here so I can reference them later. There's a lot to take in, so you'd do yourself a solid by reading Eric's post in full. Your takeaways may be different than mine. Traditional vs. Inclusive user research First off, I love how Eric distinguishes what we typically refer to as the general type of user personas, like the ones I made to generalize an audience, from inclusive user personas that are based on individual experiences. Inclusive user research practices are different than a lot of traditional user research. While there is some high-level overlap in approach, know the majority of inclusive user research is more focused on the individual experience and less about more general trends of behavior. So, right off the bat we have to reframe what we're talking about. There are blanket personas that are placeholders for abstracting what we think we know about specific groups of people versus individual people that represent specific experiences that impact usability and access to content. A primary goal in inclusive user research is often to identify concrete barriers that prevent someone from accessing the content they want or need. While the techniques people use are varied, these barriers represent insurmountable obstacles that stymie a whole host of navigation techniques and approaches. If you're looking for patterns, trends, and customer insights, know that what you want is regular user testing. Here, know that the same motivating factors you're looking to uncover also exist for disabled people. This is because they're also, you know, people. Assistive technology is not exclusive to disabilities It's so easy to assume that using assistive tools automatically means accommodating a disability or impairment, but that's not always the case. Choice points from Eric: First is that assistive technology is a means, and not an end. Some disabled people use more than one form of assistive technology, both concurrently and switching them in and out as needed. Some disabled people don't use assistive technology at all. Not everyone who uses assistive technology has also mastered it. Disproportionate attention placed on one kind of assistive technology at the expense of others. It's entirely possible to have a solution that is technically compliant, yet unintuitive or near-impossible to use in the actual. I like to keep in mind that assistive technologies are for everyone.
I often think about examples in the physical world where everyone benefits from an accessibility enhancement, such as cutting curbs in sidewalks (great for skateboarders!), taking elevators (you don't have to climb stairs in some cases), and using TV subtitles (I often have to keep the volume low for sleeping kids). That's the inclusive part of this. Everyone benefits rather than a specific subset of people. Different personas, different priorities What happens when inclusive research is documented separately from general user research? Another folly of inclusive personas is that they're decoupled from regular personas. This means they're easily dismissible as considerations. […] Disability is diversity, and the plain and honest truth is that diversity is missing from your personas if disability conditions are not present in at least some of them. This, in turn, means your personas are misrepresentative of the people in the abstract you claim to serve. In practice, that means: […] we also want to hold space for things that need direct accessibility support and remediation when this consideration of accessibility fails to happen. It's all about approach. An example of how to consider your approach is when adding drag and drop support to an experience. […] [W]e want to identify if drag and drop is even needed to achieve the outcome the organization needs. Thinking of a slick new feature that will impress your users? Great! Let's make sure it doesn't step on the toes of other experiences in the process, because that's antithetical to inclusiveness. I recognize this temptation in my own work, particularly if I land on a novel UI pattern that excites me. The excitement and tickle I get from a "clever" idea gives me a blind spot when it comes to evaluating the overall effectiveness of it. Radical participatory design Gosh dang, why didn't my schoolwork ever cover this! I had to spend a little time reading the Cambridge University Press article explaining radical participatory design (RPD) that Eric linked up. Therefore, we introduce the term RPD to differentiate and represent a type of PD that is participatory to the root or core: full inclusion as equal and full members of the research and design team. Unlike other uses of the term PD, RPD is not merely interaction, a method, a way of doing a method, nor a methodology. It is a meta-methodology, or a way of doing a methodology. Ah, a method for methodology! We're talking about not only including community members in the internal design process, but making them equal stakeholders as well. They get the power to make decisions, something the article's author describes as a form of decolonization. Or, as Eric nicely describes it: Existing power structures are flattened and more evenly distributed with this approach. Bonus points for surfacing the model minority theory: The term "model minority" describes a minority group that society regards as high-performing and successful, especially when compared to other groups. The narrative paints Asian American children as high-achieving prodigies, with fathers who practice medicine, science, or law and fierce mothers who force them to work harder than their classmates and hold them to standards of perfection. It introduces exclusiveness in the quest to pursue inclusiveness – a stereotype within a stereotype. Thinking bigger Eric caps things off with a great compilation of actionable takeaways for avoiding the pitfalls of inclusive user personas: Letting go of control leads to better outcomes.
Member checking: letting participants review, comment on, and correct the content you've created based on their input. Take time to scrutinize the functions of our roles and how our organizations compel us to undertake them in order to be successful within them. Organizations can turn inwards and consider the artifacts their existing design and research processes produce. They can then identify opportunities for participants to provide additional clarity and corrections along the way. On inclusive personas and inclusive user research originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Is it Time to Un-Sass?
- Articles
- css preprocessors
Many of the Sass features we've grown to love have made their way into native CSS in some shape or form. So, should we still use Sass? This is how developer Jeff Bridgforth is thinking about it.
Is it Time to Un-Sass? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Several weeks ago, I participated in Front End Study Hall. Front End Study Hall is an HTML and CSS focused meeting held on Zoom every two weeks. It is an opportunity to learn from one another as we share our common interest in these two building blocks of the Web. Some weeks, there is more focused discussion while other weeks are more open-ended and members will ask questions or bring up topics of interest. Joe, the moderator of the group, usually starts the discussion with something he has been thinking about. In this particular meeting, he asked us about Sass. He asked us if we used it, if we liked it, and then to share our experience with it. I had planned to answer the question but the conversation drifted into another topic before I had the chance to answer. I saw it as an opportunity to write and to share some of the things that I have been thinking about recently. Beginnings I started using Sass in March 2012. I had been hearing about it through different things I read. I believe I heard Chris Coyier talk about it on his then-new podcast, ShopTalk Show. I had been interested in redesigning my personal website and I thought it would be a great chance to learn Sass. I bought an e-book version of Pragmatic Guide to Sass and then put what I was learning into practice as I built a new version of my website. The book suggested using Compass to process my Sass into CSS. I chose to use SCSS syntax instead of indented syntax because SCSS was similar to plain CSS. I thought it was important to stay close to the CSS syntax because I might not always have the chance to use Sass, and I wanted to continue to build my CSS skills. It was very easy to get up and running with Sass. I used a GUI tool called Scout to run Compass. After some frustration trying to update Ruby on my computer, Scout gave me an environment to get up and going quickly. I didn't even have to use the command line. I just pressed "Play" to tell my computer to watch my files. Later I learned how to use Compass through the command line. I liked the simplicity of that tool and wish that at least one of today's build tools incorporated that same simplicity. I enjoyed using Sass out of the gate. I liked that I was able to create reusable variables in my code. I could set up colors and typography and have consistency across my code. I had not planned on using nesting much but after I tried it, I was hooked. I really liked that I could write less code and manage all the relationships with nesting. It was great to be able to nest a media query inside a selector and not have to hunt for it in another place in my code. Fast-forward a bit… After my successful first experience using Sass in a personal project, I decided to start using it in my professional work. And I encouraged my teammates to embrace it. One of the things I liked most about Sass was that you could use as little or as much as you liked. I was still writing CSS but now had the superpower that the different helper functions in Sass enabled. I did not get as deep into Sass as I could have. I used the Sass @extend rule more in the beginning. There are a lot of features that I did not take advantage of, like placeholder selectors and for loops. I have never been one to rely much on shortcuts. I use very few of the shortcuts on my Mac. I have dabbled in things like Emmet but tend to quickly abandon them because I am just used to writing things out and have not developed the muscle memory of using shortcuts. Is it time to un-Sass?
By my count, I have been using Sass for over 13 years. I chose Sass over Less.js because I thought it was a better direction to go at the time. And my bet paid off. That is one of the difficult things about working in the technical space. There are a lot of good tools but some end up rising to the top and others fall away. I have been pretty fortunate that most of the decisions I have made have gone the way that they have. All the agencies I have worked for have used Sass. At the beginning of this year, I finally jumped into building a prototype for a personal project that I have been thinking about for years: my own memory keeper. One of the few things that I liked about Facebook was the Memories feature. I enjoyed visiting that page each day to remember what I had been doing on that specific day in years past. But I felt at times that Facebook was not giving me all of my memories. And my life doesn't just happen on Facebook. I also wanted a way to view memories from other days besides just the current date. As I started building my prototype, I wanted to keep it simple. I didn't want to have to set up any build tools. I decided to write CSS without Sass. Okay, so that was my intention. But I soon realized that I was using nesting. I had been working on it a couple of days before I realized it. But my code was working. That is when I realized that the native nesting in CSS works much the same as nesting in Sass. I had followed the discussion about implementing nesting in native CSS. At one point, the syntax was going to be very different. To be honest, I lost track of where things had landed because I was continuing to use Sass. Native CSS nesting was not a big concern to me right then. I was amazed when I realized that nesting works just the same way. And it was in that moment that I began to wonder: Is this finally the time to un-Sass? I want to give credit where credit is due. I'm borrowing the term "un-Sass" from Stu Robson, who is actually in the middle of writing a series called "Un-Sass'ing my CSS" as I started thinking about writing this post. I love the term "un-Sass" because it is easy to remember and so spot on to describe what I have been thinking about. Here is what I am taking into consideration: Custom Properties I knew that a lot about what I liked about Sass had started to make its way into native CSS. Custom properties were one of the first things. Custom properties are more powerful than Sass variables because you can assign a new value to a custom property in a media query or in a theming system, like light and dark modes. That's something Sass is unable to do since variables become static once they are compiled into vanilla CSS. You can also assign and update custom properties with JavaScript. Custom properties also work with inheritance and have a broader scope than Sass variables. So, yeah. I found that not only was I already fairly familiar with the concept of variables, thanks to Sass, but the native CSS version was much more powerful. I first used CSS Custom Properties when building two different themes (light and dark) for a client project. I also used them several times with JavaScript and liked how it gave me new possibilities for using CSS and JavaScript together. In my new job, we use custom properties extensively and I have completely switched over to using them in any new code that I write. I made use of custom properties extensively when I redesigned my personal site last year.
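To make that difference concrete, here is a minimal sketch of the idea, using made-up property names and colors rather than anything from the projects mentioned above. The same custom properties are simply reassigned for a dark color scheme, which a compiled Sass variable cannot do at runtime:

:root {
  --surface: #ffffff;
  --ink: #1a1a1a;
}

/* The very same properties take on new values when the user prefers a dark scheme */
@media (prefers-color-scheme: dark) {
  :root {
    --surface: #1a1a1a;
    --ink: #f5f5f5;
  }
}

body {
  background-color: var(--surface);
  color: var(--ink);
}

And because custom properties live in the DOM, a one-liner like document.documentElement.style.setProperty("--surface", "#000") can update them from JavaScript, which is the kind of CSS-and-JavaScript pairing described above.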
I took advantage of it to create a light and dark theme and I utilized it with Utopia for typography and spacing utilities. Nesting When Sass introduced nesting, it simplified the writing of CSS code because you write style rules within another style rule (usually a parent). This means that you no longer had to write out the full descendent selector as a separate rule. You could also nest media queries, feature queries, and container queries. This ability to group code together made it easier to see the relationships between parent and child selectors. It was also useful to have the media queries, container queries, or feature queries grouped inside those selectors rather than grouping all the media query rules together further down in the stylesheet. I already mentioned that I stumbled across native CSS nesting when writing code for my memory keeper prototype. I was very excited that the specification extended what I already knew about nesting from Sass. Two years ago, the nesting specification was going to require you to start the nested query with the & symbol, which was different from how it worked in Sass. .footer { a { color: blue } } /* 2023 */ .footer { & a { color: blue } /* This was valid then */ } But that changed sometime in the last two years and you no longer need the ampersand (&) symbol to write a nested query. You can write just as you had been writing it in Sass. I am very happy about this change because it means native CSS nesting is just like I have been writing it in Sass. /* 2025 */ .footer { a { color: blue } /* Today's valid syntax */ } There are some differences in the native implementation of nesting versus Sass. One difference is that you cannot create concatenated selectors with CSS. If you love BEM, then you probably made use of this feature in Sass. But it does not work in native CSS. .card { &__title {} &__body {} &__footer {} } It does not work because the & symbol is a live object in native CSS and it is always treated as a separate selector. Don't worry, if you don't understand that, neither do I. The important thing is to understand the implication – you cannot concatenate selectors in native CSS nesting. If you are interested in reading a bit more about this, I would suggest Kevin Powell's "Native CSS Nesting vs. Sass Nesting" from 2023. Just know that the information about having to use the & symbol before an element selector in native CSS nesting is out of date. I never took advantage of concatenated selectors in my Sass code so this will not have an impact on my work. For me, nesting in native CSS is equivalent to how I was using it in Sass and is one of the reasons to consider un-Sassing. My advice is to be careful with nesting. I would suggest trying to keep your nested code to three levels at the most. Otherwise, you end up with very long selectors that may be more difficult to override in other places in your codebase. Keep it simple. The color-mix() function I liked using the Sass color module to lighten or darken a color. I would use this most often with buttons where I wanted the hover color to be different. It was really easy to do with Sass. (I am using $color to stand in for the color value). background-color: darken($color, 20%); The color-mix() function in native CSS allows me to do the same thing and I have used it extensively in the past few months since learning about it from Chris Ferdinandi.
background-color: color-mix(in oklab, var(--color), #000000 20%); Mixins and functions I know that a lot of developers who use Sass make extensive use of mixins. In the past, I used a fair number of mixins. But a lot of the time, I was just pasting mixins from previous projects. And many times, I didn't make as much use of them as I could because I would just plain forget that I had them. They were always nice helper functions and allowed me to not have to remember things like clearfix or font smoothing. But those were also techniques that I found myself using less and less. I also utilized functions in Sass and created several of my own, mostly to do some math on the fly. I mainly used them to convert pixels into ems because I liked being able to define my typography and spacing as relative and creating relationships in my code. I also had written a function to convert pixels to ems for custom media queries that did not fit within the breakpoints I normally used. I had learned that it was a much better practice to use ems in media queries so that layouts would not break when a user used page zoom. Currently, we do not have a way to do mixins and functions in native CSS. But there is work being done to add that functionality. Geoff wrote about the CSS Functions and Mixins Module. I did a little experiment for the use case I was using Sass functions for. I wanted to calculate em units from pixels in a custom media query. My standard practice is to set the body text size to 100%, which equals 16 pixels by default. So, I wrote a calc() function to see if I could replicate what my Sass function provided me. @media (min-width: calc((600 / 16) * 1em)); This custom media query is for a minimum width of 600px. This would work based on my setting the base font size to 100%. It could be modified. Tired of tooling Another reason to consider un-Sassing is that I am simply tired of tooling. Tooling has gotten more and more complex over the years, and not necessarily with a better developer experience. From what I have observed, today's tooling is predominantly geared towards JavaScript-first developers, or anyone using a framework like React. All I need is a tool that is easy to set up and maintain. I don't want to have to learn a complex system in order to do very simple tasks. Another issue is dependencies. At my current job, I needed to add some new content and styles to an older WordPress site that had not been updated in several years. The site used Sass, and after a bit of digging, I discovered that the previous developer had used CodeKit to process the code. I renewed my CodeKit license so that I could add CSS to style the content I was adding. It took me a bit to get the settings correct because the settings in the repo were not saving the processed files to the correct location. Once I finally got that set, I continued to encounter errors. Dart Sass, the engine that powers Sass, introduced changes to the syntax that broke the existing code. I started refactoring a large amount of code to update the site to the correct syntax, allowing me to write new code that would be processed. I spent about 10 minutes attempting to refactor the older code, but was still getting errors. I just needed to add a few lines of CSS to style the new content I was adding to the site. So, I decided to go rogue and write the new CSS I needed directly in the WordPress template.
I have had similar experiences with other legacy codebases, and that's the sort of thing that can happen when you're super reliant on third-party dependencies. You spend more time trying to refactor the Sass code so you can get to the point where you can add new code and have it compiled. All of this has left me tired of tooling. I am fortunate enough at my new position that the tooling is all set up through the Django CMS. But even with that system, I have run into issues. For example, I tried using a mixture of percentage and pixel values in a minmax() function and Sass was trying to evaluate it as a math function and the units were incompatible. grid-template-columns: repeat(auto-fill, minmax(min(200px, 100%), 1fr)); I needed to be able to escape and not have Sass try to evaluate the code as a math function: grid-template-columns: repeat(auto-fill, minmax(unquote("min(200px, 100%)"), 1fr)); This is not a huge pain point, but it was something that I had to take some time to investigate, time I could have been using to write HTML or CSS. Thankfully, that is something Ana Tudor has written about. All of these different pain points lead me to be tired of having to mess with tooling. It is another reason why I have considered un-Sassing. Verdict So what is my verdict – is it time to un-Sass? Please don't hate me, but my conclusion is: it depends. Maybe not the definitive answer you were looking for. But you probably are not surprised. If you have been working in web development even a short amount of time, you know that there are very few definitive ways of doing things. There are a lot of different approaches, and just because someone else solves it differently, does not mean you are right and they are wrong (or vice versa). Most things come down to the project you are working on, your audience, and a host of other factors. For my personal site, yes, I would like to un-Sass. I want to kick the build process to the curb and eliminate those dependencies. I would also like for other developers to be able to view source on my CSS. You can't view source on Sass. And part of the reason I write on my site is to share solutions that might benefit others, and making code more accessible is a nice maintenance enhancement. My personal site does not have a very large codebase. I could probably easily un-Sass it in a couple of days or over a weekend. But for larger sites, like the codebase I work with at my job, I wouldn't suggest un-Sassing. There is way too much code that would have to be refactored and I am unable to justify the cost for that kind of effort. And honestly, it is not something I feel motivated to tackle. It works just fine the way that it is. And Sass is still a very good tool to use. It's not "breaking" anything. Your project may be different and there might be more gains from un-Sassing than there are for the project I work on. Again, it depends. The way forward It is an exciting time to be a CSS developer. The language is continuing to evolve and mature. And every day, it is incorporating new features that first came to us through other third-party tools such as Sass. It is always a good idea to stop and re-evaluate your technology decisions to determine if they still hold up or if more modern approaches would be a better way forward. That does not mean we have to go back and "fix" all of our old projects. And it might not mean doing a complete overhaul. A lot of newer techniques can live side by side with the older ones.
We have a mix of both Sass variables and CSS custom properties in our codebase. They don't work against each other. The great thing about web technologies is that they build on each other, and there is usually backward compatibility. Don't be afraid to try new things. And don't judge your past work based on what you know today. You did the best you could given your skill level, the constraints of the project, and the technologies you had available. You can start to incorporate newer ways right alongside the old ones. Just build websites!

Is it Time to Un-Sass? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
The "Most Hated" CSS Feature: cos() and sin()
- Articles
- animations
- math
I want to look at practical uses for CSS trigonometric functions. And we'll start with what may be the most popular functions of the "worst" feature: sin() and cos().
No feature is truly "the worst" in CSS, right? After all, it's all based on opinion and personal experience, but if we had to reach a consensus, checking the State of CSS 2025 results would be a good starting point. I did exactly that, jumped into the awards section, and there I found it: the "Most Hated Feature," a title no CSS feature should have to bear… This shocks me, if I'm being honest. Are trigonometric functions really that hated? I know "hated" is not the same as saying something is "worst", but it still has an awful ring to it. And I know I'm being a little dramatic here, since only "9.1% of respondents truly hate trigonometry." But that's still too much shade being thrown for my taste. I want to eliminate that 9.1%.

So, in this series, I want to look at practical uses for CSS trigonometric functions. We'll tackle them in pieces because there's a lot to take in and I find it easiest to learn and retain information when it's chunked into focused, digestible pieces. And we'll start with what may be the most popular functions of the "worst" feature: sin() and cos().

CSS Trigonometric Functions:
- The "Most Hated" CSS Feature sin() and cos() (You are here!)
- Tackling the CSS tan() Function (coming soon)
- Inverse functions: asin(), acos(), atan() and atan2() (coming soon)

What the heck are cos() and sin() anyway?

This section is for those for whom cos() and sin() don't quite click yet, or who simply want a refresher. If you aced trigonometry quizzes in high school, feel free to skip ahead to the next section!

What I find funny about cos() and sin(), and also why I think there is confusion around them, is the many ways we can describe them. We don't have to look too hard. A quick glance at this Wikipedia page turns up an eye-watering number of super nuanced definitions. This is a learning problem in the web development field. I feel like some of those definitions are far too general and lack detail about the essence of what trigonometric functions like sin() and cos() can do. Conversely, other definitions are overly complex and academic, making them tough to grok without an advanced degree. Let's stick to the sweet middle spot: the unit circle.

Meet the unit circle. It is a circle with a radius of one unit: Right now it's alone… in space. Let's place it on the Cartesian coordinate system (the classic chart with X and Y axes). We describe each point in space in Cartesian coordinates:
- The X coordinate: The horizontal axis, plotting the point towards the left or right.
- The Y coordinate: The vertical axis, plotting the point towards the top or bottom.

We can move through the unit circle by an angle, which is measured from the positive X-axis going counter-clockwise. We can go in a clockwise direction by using negative angles. As my physics teacher used to say, "Time is negative!" Notice how each angle lands on a unique point in the unit circle. How else can we describe that point using Cartesian coordinates? When the angle is 0°, the X and Y coordinates are 1 and 0 (1, 0), respectively. We can deduce the Cartesian coordinates for other angles just as easily, like 90°, 180° and 270°. But for any other angle, we don't know where the point is initially located on the unit circle. If only there were a pair of functions that take an angle and give us our desired coordinates… You guessed it, the CSS cos() and sin() functions do exactly that. And they're very closely related, where cos() is designed to handle the X coordinate and sin() returns the Y coordinate.
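To make that mapping concrete, here is a minimal sketch (the .dot class, the angle, and the pixel values are made up purely for illustration) that places an absolutely positioned dot on a 100px circle at whatever angle we feed it:

.dot {
  --angle: 45deg;
  --radius: 100px;
  position: absolute;
  /* cos() and sin() take an angle and return plain numbers between -1 and 1;
     multiplying by a length turns those numbers into usable offsets */
  translate: calc(cos(var(--angle)) * var(--radius))
             calc(sin(var(--angle)) * -1 * var(--radius)); /* flip Y because the screen's Y axis points down */
}

Change --angle to 0deg, 90deg, or 180deg and the dot lands exactly on the points we worked out by hand above.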
Play with the toggle slider in the following demo to see the relationship between the two functions, and notice how they form a right triangle with the initial point on the unit circle: I think thatâs all you really need to know about cos() and sin() for the moment. Theyâre mapped to Cartesian coordinates, which allows us to track a point along the unit circle with an angle, no matter what size that circle happens to be. Letâs dive into what we can actually use cos() and sin() for our everyday CSS work. Itâs always good to put a little real-world context to theoretical concepts like math. Circular layouts If we go by the unit circle definition of cos() and sin(), then itâs easy to see how they might be used to create circular layouts in CSS. The initial setup is a single row of circular elements: Say we want to place each circular item around the outline of a larger circle instead. First, we would let CSS know the total number of elements and also each elementâs index (the order itâs in), something we can do with an inline CSS variable that holds each order in the position: <ul style="--total: 9"> <li style="--i: 0">0</li> <li style="--i: 1">1</li> <li style="--i: 2">2</li> <li style="--i: 3">3</li> <li style="--i: 4">4</li> <li style="--i: 5">5</li> <li style="--i: 6">6</li> <li style="--i: 7">7</li> <li style="--i: 8">8</li> </ul> Note: This step will become much easier and concise when the sibling-index() and sibling-count() functions gain support (and theyâre really neat). Iâm hardcoding the indexes with inline CSS variables in the meantime. To place the items around the outline of a larger circle, we have to space them evenly by a certain angle. And to get that angle, we can divide 360deg (a full turn around the circle) by the total number of items, which is 8 in this specific example. Then, to get each elementâs specific angle, we can multiply the angle spacing by the elementâs index (i.e., position): li { --rotation: calc(360deg / var(--total) * var(--i)); } We also need to push the items away from the center, so weâll assign a --radius value for the circle using another variable. ul { --radius: 10rem; } We have the elementâs angle and radius. Whatâs left is to calculate the X and Y coordinates for each item. Thatâs where cos() and sin() come into the picture. We use them to get the X and Y coordinates that place each item around the unit circle, then multiply each coordinate by the --radius value to get an itemâs final position on the bigger circle: li { /* ... */ position: absolute; transform: translateX(calc(cos(var(--rotation)) * var(--radius))) translateY(calc(sin(var(--rotation)) * var(--radius))); } Thatâs it! We have a series of eight circular items placed evenly around the outline of a larger circle: And we didnât need to use a bunch of magic numbers to do it! All we provide CSS with is the unit circleâs radius, and then CSS does all the trigonometric gobbledygook that makes so many of us call this the âworstâ CSS feature. Hopefully, Iâve convinced you to soften your opinions on them if thatâs what was holding you back! We arenât limited to full circles, though! We can also have a semicircular arrangement by choosing 180deg instead of 360deg. This opens up lots of layout possibilities. Like, what if we want a circular menu that expands from a center point by transitioning the radius of the circle? We can totally do that: Click or hover the heading and the menu items form around the circle! Wavy layouts Thereâs still more we can do with layouts! 
If, say, we plot the cos() and sin() coordinates on a two-axis graph, notice how they give us a pair of waves that periodically go up and down. And notice they are offset from each other along the horizontal (X) axis: Where do these waves come from? If we think back to the unit circle we talked about earlier, the value of cos() and sin() oscillate between -1 and 1. In other words, the lengths match when the angle around the unit circle varies. If we graph that oscillation, then weâll get our wave and see that theyâre sorta like reflections of each other. â ïž Auto-playing media Can we place an element following one of these waves? Absolutely. Letâs start with the same single row layout of circular items we made earlier. This time, though, the length of that row spans beyond the viewport, causing overflow. Weâll assign an index position for each item like we did before, but this time we donât need to know the total number of items. We had eight items last time, so letâs bump that up to 10 and pretend like we donât know that: <ul> <li style="--i: 0"></li> <li style="--i: 1"></li> <li style="--i: 2"></li> <li style="--i: 3"></li> <li style="--i: 4"></li> <li style="--i: 5"></li> <li style="--i: 6"></li> <li style="--i: 7"></li> <li style="--i: 8"></li> <li style="--i: 9"></li> <li style="--i: 10"></li> </ul> We want to vary the elementâs vertical position along either a sin() or cos() wave, meaning translating each itemâs position based on its order in the index. Weâll multiply an itemâs index by a certain angle that is passed into the sin() function, and that will return a ratio that describes how high or low the element should be on the wave. The final thing is to multiply that result by a length value, which I calculated as half an itemâs total size. Hereâs the math in CSS-y terms: li { transform: translateY(calc(sin(60deg * var(--i)) * var(--shape-size) / 2)); } Iâm using a 60deg value because the waves it produces are smoother than some other values, but we can vary it as much as we want to get cooler waves. Play around with the toggle in the next demo and watch how the waveâs intensity changes with the angle: This is a great example to see what weâre working with, but how would you use it in your work? Imagine we have two of these wavy chains of circles, and we want to intertwine them together, kinda like a DNA strand. Letâs say weâre starting with the HTML structure for two unordered lists nested inside another unordered list. The two nested unordered lists represent the two waves that form the chain pattern: <ul class="waves"> <!-- First wave --> <li> <ul class="principal"> <!-- Circles --> <li style="--i: 0"></li> <li style="--i: 1"></li> <li style="--i: 2"></li> <li style="--i: 3"></li> <!-- etc. --> </ul> </li> <!-- Second wave --> <li> <ul class="secondary"> <!-- Circles --> <li style="--i: 0"></li> <li style="--i: 1"></li> <li style="--i: 2"></li> <li style="--i: 3"></li> <!-- etc. --> </ul> </li> </ul> Pretty similar to the examples weâve seen so far, right? Weâre still working with an unordered list where the items are indexed with a CSS variable, but now weâre working with two of those lists⊠and theyâre contained inside a third unordered list. We donât have to structure this as lists, but I decided to leave them so I can use them as hooks for additional styling later. To avoid any problems, weâll ignore the two direct <li> elements in the outer unordered list that contain the other lists using display: contents. 
.waves > li { display: contents; } Notice how one of the chains is the âprincipalâ while the other is the âsecondary.â The difference is that the âsecondaryâ chain is positioned behind the âprincipalâ chain. Iâm using slightly different background colors for the items in each chain, so itâs easier to distinguish one from the other as you scroll through the block-level overflow. We can reorder the chains using a stacking context: .principal { position: relative; z-index: 2; } .secondary { position: absolute; } This positions one chain on top of the other. Next, we will adjust each itemâs vertical position with the âhatedâ sin() and cos() functions. Remember, theyâre sorta like reflections of one another, so the variance between the two is what offsets the waves to form two intersecting chains of items: .principal { /* ... */ li { transform: translateY(calc(sin(60deg * var(--i)) * var(--shape-size) / 2)); } } .secondary { /* ... */ li { transform: translateY(calc(cos(60deg * var(--i)) * var(--shape-size) / 2)); } } We can accentuate the offset even more by shifting the .secondary wave another 60deg: .secondary { /* ... */ li { transform: translateY(calc(cos(60deg * var(--i) + 60deg) * var(--shape-size) / 2)); } } The next demo shows how the waves intersect at an offset angle of 60deg. Adjust the slider toggle to see how the waves intersect at different angles: Oh, I told you this could be used in a practical, real-world way. How about adding a little whimsy and flair to a hero banner: Damped oscillatory animations The last example got me thinking: is there a way to use sin() and cos()âs back and forth movement for animations? The first example that came to mind was an animation that also went back and forth, something like a pendulum or a bouncing ball. This is, of course, trivial since we can do it in a single animation declaration: .element { animation: someAnimation 1s infinite alternate; } This âback and forthâ animation is called oscillatory movement. And while cos() or sin() are used to model oscillations in CSS, it would be like reinventing the wheel (albeit a clunkier one). Iâve learned that perfect oscillatory movement â like a pendulum that swings back and forth in perpetuity, or a ball that never stops bouncing â doesnât really exist. Movement tends to decay over time, like a bouncing spring: â ïž Auto-playing media Thereâs a specific term that describes this: damped oscillatory movement. And guess what? We can model it in CSS with the cos() function! If we graph it over time, then we will see it goes back and forth while getting closer to the resting position1. Wikipedia has another animated example that nicely demonstrates what damped oscillation looks like. In general, we can describe damped oscillation over time as a mathematical function: Itâs composed of three parts: eâÎłt: Due to the negative exponent, it becomes exponentially smaller as time passes, bringing the movement to a gradual stop. It is multiplied by a damping constant (Îł) that specifies how quickly the movement should decay. a: This is the initial amplitude of the oscillation, i.e., the elementâs initial position. cos(Ïtâα): This gives the movement its oscillation as time passes. Time is multiplied by frequency (Ï), which determines an elementâs oscillation speed2. We can also subtract from time α, which we can use to offset the initial oscillation of the system. Okay, enough with all the theory! How do we do it in CSS? Weâll set the stage with a single circle sitting all by itself. 
We have a few CSS variables we can define that will come in handy since we already know the formula weâre working with: :root { --circle-size: 60px; --amplitude: 200px; /* The amplitude is the distance, so let's write it in pixels*/ --damping: 0.3; --frequency: 0.8; --offset: calc(pi/2); /* This is the same as 90deg! (But in radians) */ } Given these variables, we can peek at what the animation would look like on a graph using a tool like GeoGebra: From the graph, we can see that the animation starts at 0px (thanks to our offset), then peaks around 140px and dies out around 25s in. I, for one, wonât be waiting 25 seconds for the animation to end, so letâs create a --progress property that will animate between 0 to 25, and will act as our âtimeâ in the function. Remember that to animate or transition a custom property, weâve gotta register it with the @property at-rule. @property --progress { syntax: "<number>"; initial-value: 0; inherits: true; } @keyframes movement { from { --progress: 0; } to { --progress: 25; } } Whatâs left is to implement the prior formula for the elementâs movement, which, written in CSS terms, looks like this: .circle { --oscillation: calc( (exp(-1 * var(--damping) * var(--progress))) * var(--amplitude) * cos(var(--frequency) * (var(--progress)) - var(--offset)) ); transform: translateX(var(--oscillation)); animation: movement 1s linear infinite; } This gives a pretty satisfying animation by itself, but the damped motion is only on the x-axis. What would it look like if, instead, we applied the damped motion on both axes? To do this, we can copy the same oscillation formula for x, but replace the cos() with sin(). .circle { --oscillation-x: calc( (exp(-1 * var(--damping) * var(--progress))) * var(--amplitude) * cos(var(--frequency) * (var(--progress)) - var(--offset)) ); --oscillation-y: calc( (exp(-1 * var(--damping) * var(--progress))) * var(--amplitude) * sin(var(--frequency) * (var(--progress)) - var(--offset)) ); transform: translateX(var(--oscillation-x)) translateY(var(--oscillation-y)); animation: movement 1s linear infinite; } This is even more satisfying! A circular and damped motion, all thanks to cos() and sin(). Besides looking great, how could this be used in a real layout? We donât have to look too hard. Take, for example, this sidebar I recently made where the menu items pop in the viewport with a damped motion: Pretty neat, right?! More trigonometry to come! Well, finding uses for the âmost hated CSS featureâ wasnât that hard; maybe we should start showing some love to trigonometric functions. But wait. There are still several trigonometric functions in CSS we havenât talked about. In the following posts, weâll keep exploring what trig functions (like tan() and inverse functions) can do in CSS. CSS Trigonometric Functions: The âMost Hatedâ CSS Feature sin() and cos() (You are here!) Tackling the CSS tan() Function (coming soon) Inverse functions: asin(), acos(), atan() and atan2() (coming soon) Also, before I forget, here is another demo I made using cos() and sin() that didnât make the cut in this article, but it is still worth checking out because it dials up the swirly-ness from the last example to show how wacky we can get. Footnotes This kind of damped oscillatory movement, where the back and forth is more visible, is called underdamped oscillation. There are also overdamped and critically damped oscillations, but we wonât focus on them here. âȘïž In reality, the damped constant and the frequency are closely related. 
You can read more about damped oscillation in this paper.

The "Most Hated" CSS Feature: cos() and sin() originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
What Can We Actually Do With corner-shape?
- Articles
- css properties
- shapes
When I first started messing around with code, rounded corners required five background images or an image sprite likely created in Photoshop, so when border-radius came onto the scene, I remember everybody thinking that it was the best thing ever. …
When I first started messing around with code, rounded corners required five background images or an image sprite likely created in Photoshop, so when border-radius came onto the scene, I remember everybody thinking that it was the best thing ever. Web designs were very square at the time, so to have border-radius was super cool, and it saved us a lot of time, too. Chris' border-radius article from 2009, which at the time of writing is 16 years old (wait, how old am I?!), includes vendor prefixes for older web browsers, including "old Konqueror browsers" (-khtml-border-radius). What a time to be alive!

We're much less excited about rounded corners nowadays. In fact, sharp corners have made a comeback and are just as popular now, as are squircles (square-ish circles or circle-y squares, take your pick), which is exactly what the corner-shape CSS property enables us to create (in addition to many other cool UI effects that I'll be walking you through today). At the time of writing, only Chrome 139 and above supports corner-shape, which must be used with the border-radius property and/or any of the related individual properties (i.e., border-top-left-radius, border-top-right-radius, border-bottom-right-radius, and border-bottom-left-radius):

Snipped corners using corner-shape: bevel

These snipped corners are becoming more and more popular as UI designers embrace brutalist aesthetics. In the example above, it's as easy as using corner-shape: bevel for the snipped corners effect and then border-bottom-right-radius: 16px for the size.

corner-shape: bevel;
border-bottom-right-radius: 16px;

We can do the same thing and it really works with a cyberpunk aesthetic:

Slanted sections using corner-shape: bevel

Slanted sections are an even more popular visual effect, probably not going anywhere, and again, they help elements look a lot less like the boxes that they are. Before we dive in though, it's important to keep in mind that each border radius has two semi-major axes, a horizontal axis and a vertical axis, with a "point" (to use vector terminology) on each axis. In the example above, both are set to 16px, so both points move along their respective axis by that amount, away from their corner of course, and then the beveled line is drawn between them. In the slanted section example below, however, we need to supply a different point value for each axis, like this:

corner-shape: bevel;
border-bottom-right-radius: 100% 50px;

The first point moves along 100% of the horizontal axis whereas the second point travels 50px of the vertical axis, and then the beveled line is drawn between them, creating the slant that you see above. By the way, having different values for each axis and border radius is exactly how those cool border radius blobs are made.

Sale tags using corner-shape: round bevel bevel round

You've seen those sale tags on almost every e-commerce website, either as images or with rounded corners and not the pointy part (other techniques just aren't worth the trouble). But now we can carve out the proper shape using two different types of corner-shape at once, as well as a whole set of border radius values: You'll need corner-shape: round bevel bevel round to start off. The order flows clockwise, starting from the top-left, as follows:
- top-left
- top-right
- bottom-right
- bottom-left
Just like with border-radius.
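To see that ordering in action, here is a small, hedged example (the class name and values are arbitrary) that snips the two top corners and rounds the two bottom ones, assuming a browser that supports corner-shape (Chrome 139+ at the time of writing):

.tag {
  /* top-left, top-right, bottom-right, bottom-left */
  corner-shape: bevel bevel round round;
  border-radius: 24px;
}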
You can omit some values, causing them to be inferred from other values, but both the inference logic and resulting value syntax lack clarity, so Iâd just avoid this, especially since weâre about to explore a more complex border-radius: corner-shape: round bevel bevel round; border-radius: 16px 48px 48px 16px / 16px 50% 50% 16px; Left of the forward slash (/) we have the horizontal-axis values of each corner in the order mentioned above, and on the right of the /, the vertical-axis values. So, to be clear, the first and fifth values correspond to the same corner, as do the second and sixth, and so on. You can unpack the shorthand if itâs easier to read: border-top-left-radius: 16px; border-top-right-radius: 48px 50%; border-bottom-right-radius: 48px 50%; border-bottom-left-radius: 16px; Up until now, weâve not really needed to fully understand the border radius syntax. But now that we have corner-shape, itâs definitely worth doing so. As for the actual values, 16px corresponds to the round corners (this oneâs easy to understand) while the 48px 50% values are for the bevel ones, meaning that the corners are âdrawnâ from 48px horizontally to 50% vertically, which is why and how they head into a point. Regarding borders â yes, the pointy parts would look nicer if they were slightly rounded, but using borders and outlines on these elements yields unpredictable (but I suspect intended) results due to how browsers draw the corners, which sucks. Arrow crumbs using the same method Yep, same thing. We essentially have a grid row with negative margins, but because we canât create âinsetâ arrows or use borders/outlines, we have to create an effect where the fake borders of certain arrows bleed into the next. This is done by nesting the exact same shape in the arrows and then applying something to the effect of padding-right: 3px, where 3px is the value of the would-be border. The code comments below should explain it in more detail (the complete code in the Pen is quite interesting, though): <nav> <ol> <li> <a>Step 1</a> </li> <li> <a>Step 2</a> </li> <li> <a>Step 3</a> </li> </ol> </nav> ol { /* Clip nâ round */ overflow: clip; border-radius: 16px; li { /* Arrow color */ background: hsl(270 100% 30%); /* Reverses the z-indexes, making the arrows stack */ /* Result: 2, 1, 0, ... (sibling-x requires Chrome 138+) */ z-index: calc((sibling-index() * -1) + sibling-count()); &:not(:last-child) { /* Arrow width */ padding-right: 3px; /* Arrow shape */ corner-shape: bevel; border-radius: 0 32px 32px 0 / 0 50% 50% 0; /* Pull the next one into this one */ margin-right: -32px; } a { /* Same shape */ corner-shape: inherit; border-radius: inherit; /* Overlay background */ background: hsl(270 100% 50%); } } } Tooltips using corner-shape: scoop To create this tooltip style, Iâve used a popover, anchor positioning (to position the caret relative to the tooltip), and corner-shape: scoop. The caret shape is the same as the arrow shape used in the examples above, so feel free to switch scoop to bevel if you prefer the classic triangle tooltips. 
A quick walkthrough: <!-- Connect button to tooltip --> <button popovertarget="tooltip" id="button">Click for tip</button> <!-- Anchor tooltip to button --> <div anchor="button" id="tooltip" popover>Donât eat yellow snow</div> #tooltip { /* Define anchor */ anchor-name: --tooltip; /* Necessary reset */ margin: 0; /* Center vertically */ align-self: anchor-center; /* Pin to right side + 15 */ left: calc(anchor(right) + 15px); &::after { /* Create caret */ content: ""; width: 5px; height: 10px; corner-shape: scoop; border-top-left-radius: 100% 50%; border-bottom-left-radius: 100% 50%; /* Anchor to tooltip */ position-anchor: --tooltip; /* Center vertically */ align-self: anchor-center; /* Pin to left side */ right: anchor(left); /* Popovers have this already (required otherwise) */ position: fixed; } } If youâd rather these were hover-triggered, the upcoming Interest Invoker API is what youâre looking for. Realistic highlighting using corner-shape: squircle bevel The <mark> element, used for semantic highlighting, defaults with a yellow background, but it doesnât exactly create a highlighter effect. By adding the following two lines of CSS, which admittedly I discovered by experimenting with completely random values, we can make it look more like a hand-waved highlight: mark { /* A...squevel? */ corner-shape: squircle bevel; border-radius: 50% / 1.1rem 0.5rem 0.9rem 0.7rem; /* Prevents background-break when wrapping */ box-decoration-break: clone; } We can also use squircle by itself to create those fancy-rounded app icons, or use them on buttons/cards/form controls/etc. if you think the âoldâ border radius is starting to look a bit stale: Hand-drawn boxes using the same method Same thing, only larger. Kind of looks like a hand-drawn box? Admittedly, this effect doesnât look as awesome on a larger scale, so if youâre really looking to wow and create something more akin to the Red Dead Redemption aesthetic, this border-image approach would be better. Clip a background with corner-shape: notch Notched border radii are ugly and I wonât hear otherwise. I donât think youâll want to use them to create a visual effect, but Iâve learned that theyâre useful for background clipping if you set the irrelevant axis to 50% and the axis of the side that you want to clip by the amount that you want to clip it by. So if you wanted to clip 30px off the background from the left for example, youâd choose 30px for the horizontal axes and 50% for the vertical axes (for the -left-radius properties only, of course). corner-shape: notch; border-top-left-radius: 30px 50%; border-bottom-left-radius: 30px 50%; Conclusion So, corner-shape is actually a helluva lot of fun. It certainly has more uses than I expected, and no doubt with some experimentation youâll come up with some more. With that in mind, Iâll leave it to you CSS-Tricksters to mess around with (remember though, youâll need to be using Chrome 139 or higher). As a parting gift, I leave you with this very cool but completely useless CSS Tie Fighter, made with corner-shape and anchor positioning: What Can We Actually Do With corner-shape? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Compiling Multiple CSS Files into One
- Links
- css preprocessors
- node
- postcss
- Sass
Stu Robson outlines two ways to compile multiple CSS files when you aren't relying on Sass for it.
Stu Robson is on a mission to "un-Sass" his CSS. I see articles like this pop up every year, and for good reason, as CSS has grown so many new legs in recent years. So much so that many of the core features that may have prompted you to reach for Sass in the past are now baked directly into CSS. In fact, we have Jeff Bridgforth on tap with a related article next week.

What I like about Stu's stab at this is that it's an ongoing journey rather than a wholesale switch. In fact, he's out with a new post that pokes specifically at compiling multiple CSS files into a single file. Splitting and organizing styles into separate files is definitely the reason I continue to Sass-ify my work. I love being able to find exactly what I need in a specific file and updating it without having to dig through a monolith of style rules. But is that a real reason to keep using Sass? I've honestly never questioned it, perhaps due to a lizard brain that doesn't care as long as something continues to work. Oh, I want partialized style files? Always done that with a Sass-y toolchain that hasn't let me down yet. I know, not the most proactive path.

Stu outlines two ways to compile multiple CSS files when you aren't relying on Sass for it:

Using PostCSS

Ah, that's right, we can use PostCSS both with and without Sass. It's easy to forget that PostCSS and Sass are compatible, but not dependent on one another.

postcss main.css -o output.css

Stu explains why this could be a nice way to toe-dip into un-Sass'ing your work: PostCSS can seamlessly integrate with popular build tools like webpack, Gulp, and Rollup, allowing you to incorporate CSS compilation into your existing development workflow without potential additional configuration headaches.

Custom Script for Compilation

The ultimate thing would be eliminating the need for any dependencies. Stu has a custom Node.js script for that:

const fs = require('fs');
const path = require('path');

// Function to read and compile CSS
function compileCSS(inputFile, outputFile) {
  const cssContent = fs.readFileSync(inputFile, 'utf-8');
  const imports = cssContent.match(/@import\s+['"]([^'"]+)['"]/g) || [];
  let compiledCSS = '';

  // Read and append each imported CSS file
  imports.forEach(importStatement => {
    const filePath = importStatement.match(/['"]([^'"]+)['"]/)[1];
    const fullPath = path.resolve(path.dirname(inputFile), filePath);
    compiledCSS += fs.readFileSync(fullPath, 'utf-8') + '\n';
  });

  // Write the compiled CSS to the output file
  fs.writeFileSync(outputFile, compiledCSS.trim());
  console.log(`Compiled CSS written to ${outputFile}`);
}

// Usage
const inputCSSFile = 'index.css'; // Your main CSS file
const outputCSSFile = 'output.css'; // Output file
compileCSS(inputCSSFile, outputCSSFile);

Not 100% free of dependencies, but geez, what a nice way to reduce the overhead and still combine files:

node compile-css.js

This approach is designed for a flat file directory. If you're like me and prefer nested subfolders, take note: With the flat file structure and single-level import strategy I employ, nested imports (which you can do with postcss-import) aren't necessary for my project setup, simplifying the compilation process while maintaining clean organisation.

Very cool, thanks Stu! And check out the full post because there's a lot of helpful context behind this, particularly with the custom script.

Compiling Multiple CSS Files into One originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
What're Your Top 4 CSS Properties?
- Articles
- css properties
- opinion
Everyone has a different opinion, which is great because it demonstrates the messy, non-linear craft that is thinking like a front-end developer.
Thatâs what Donnie DâAmato asks in a recent post: You are asked to build a website but you can use only 4 CSS properties, what are those? This really got the CSS-Tricks team talking. Itâs the nerdy version of âif you could only take one album with you on a remote islandâŠâ And everyone had a different opinion which is great because it demonstrates the messy, non-linear craft that is thinking like a front-end developer. Seems like a pretty straightforward thing to answer, right? But like Donnie says, this takes some strategy. Like, say spacing is high on your priority list. Are you going to use margin? padding? Perhaps youâre leaning into layout and go with gap as part of a flexbox direction⊠but then youâre committing to display as one of your options. That can quickly eat up your choices! Our answers are pretty consistent, but converged even more as the discussion wore on and all of us were coming at it with different priorities. Iâll share each personâs âgutâ reaction because I like how raw it is. I think youâll see that thereâs always a compromise in the mix, but those compromises really reveal a personâs cards as far as what they think is most important in a situation with overly tight constraints. Juan Diego Rodriguez Juan and I came out pretty close to the same choices, as weâll see in a bit: font: Typography is a priority and we get a lot of constituent properties with this single shorthand. padding: A little padding makes things breath and helps with visual separation. background: Another shorthand with lots of styling possibilities in a tiny package. color: More visual hierarchy. But he was debating with himself a bit in the process: Thinking about switching color with place-items, since it works in block elements. grid would need display, though). Ryan Trimble Ryanâs all about that bass structure: display: This opens up a world of layouts, but most importantly flex. flex-direction: Itâs a good idea to consider multi-directional layouts that are easily adjustable with media queries. width: This helps constrain elements and text, as well as divide up flex containers. margin: This is for spacing thatâs bit more versatile than gap, while also allowing us to center elements easily. And Ryan couldnât resist reaching a little out of bounds: For automatic color theme support, and no extra CSS properties required: <meta name="color-scheme" content="dark light"> Danny Schwarz Every team needs a wild card: On the contrary I think Iâd choose font, padding, and color. I wouldnât even choose a 4th. font: This isnât a big surprise if youâre familiar with Dannyâs writing. padding: So far, Ryanâs the only one to eschew padding as a core choice! color: Too bad this isnât baked right into font! Iâll also point out that Danny soon questioned his decision to use all four choices: I supposed weâd need width to achieve a good line length. Sunkanmi Fafowora This is the first list to lean squarely into CSS Grid, allowing the grid shorthand to take up a choice in favor of having a complete layout system: font: This is a popular one, right? display: Makes grid available grid: Required for this display approach color: For sprinkling in text color where it might help I love that Ryan and Sunkanmi are thinking in terms of structure, albeit in very different ways for different reasons! Zell Liew In Zellâs own words: âReally really plain and simple site here.â font: Content is still the most important piece of information. max-width: Ensures type measure is ok. margin: Lets me play around with spacing. 
color: This ensures thereâs no pure black/white contrast that hurts the eyes. Iâd love for background as well, but we only have four choices. But thereâs a little bit of nuance in those choices, as he explains: âBut Iâd switch up color for background on sites with more complex info that requires proper sectioning. In that case Iâd also switch margin with padding.â Amit Sheen Getting straight to Amitâs selections: font color background color-scheme The choices are largely driven by wanting to combat default user agent styles: The thing is, if we only have four properties, we end up relying heavily on the user agents, and the only thing Iâd really want to change is the fonts. But while we are at it, letâs add some color control. Iâm not sure how much Iâd actually use them, but it would be good to have them available. Geoff Graham Alright, Iâm not quite as exciting now that youâve seen everyone elseâs choices. Youâll see a lot of overlap here: font: A shorthand for a whopping SEVEN properties for massaging text styles. color: Seems like this would come in super handy for establishing a visual hierarchy and distinguishing one element from another. padding: I canât live without a little breathing room between an elementâs content box and its inner edge. color-scheme: Good minimal theming thatâll work nicely alongside color and support the light-dark() function. Clearly, Iâm all in on typography. That could be a very good thing or it could really constrain me when it comes to laying things out. I really had to fight the urge to use display because I always find it incredibly useful for laying things out side-by-side that wouldnât otherwise be possible with block-level elements. Your turn! Curious minds want to know! Which four properties would you take with you on a desert island? Whatâre Your Top 4 CSS Properties? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Composition in CSS
- Articles
- cascade
- CSS
- specificity
CSS is a composable language by nature. This composition nature is already built into the cascade. We simply don't talk about composition as a Big Thing because it's the nature of the language.
Tailwind and other utility libraries have been huge proponents of composition. But, to me, their version of composition has always carried a heavy sense of naïveté. I mean, utility composition is basically adding CSS values to the element, one at a time…

<div class="p-4 border-2 border-blue-500"> ... </div>

If we're honest for a minute, how is this composition different from adding CSS rules directly into a class?

/* This is composition too! */
.card {
  padding: 1rem;
  border: 2px solid var(--color-blue-500);
}

That said, I can't deny the fact that I've been thinking a lot more about composition ever since I began using Tailwind. So, here are a couple of notes that I've gathered together about CSS composition.

It's not a new concept

CSS is a composable language by nature. This composition nature is already built into the cascade. Let's say you've decided to style a button with a few properties:

.button {
  display: inline-flex;
  padding: 0.75em 1.5em;
  /* other styles... */
}

You can always tag on other classes to modify the button's appearance:

<button class="button primary"> ... </button>
<button class="button secondary"> ... </button>

.primary { background: orange; }
.secondary { background: pink; }

You can even change the appearance of other elements to a button by adding the .button class:

<a href="#" class="button"> ... </a>

Composition is happening in both cases:
- We composed .button onto an <a> element.
- We composed .primary onto .button.

So, CSS composition has been in existence since forever. We simply don't talk about composition as a Big Thing because it's the nature of the language.

Developers take a pretty narrow view of composition

When developers talk about composition in CSS, they always seem to restrict the definition of composition to the addition of classes in the HTML.

<div class="one two"> ... </div>

What's interesting is that few people, if any, speak about composition within CSS files, from the angle of using Sass mixins or advanced Tailwind utilities. In these cases, we are also composing styles… just not directly in the HTML!

@mixin button () {
  display: inline-flex;
  padding: 0.75em 1.5em;
  /* other styles ... */
}

.button { @include button; }

What is composition?

Composition comes from two possible words:
- Compose: Put together
- Composite: Made up of distinct parts or elements

Both words come from the same Latin root componere, which means to arrange or direct. In other words… all work is put together in some way, so all work is composed. This makes me wonder why composition is used in such a limited context. 🤔 Moving on…

Composition doesn't reduce bloat

Class composition reduces CSS bloat only if you're using utility classes. However, class composition with utility classes is likely to create HTML bloat.

<div class="utility composition">...</div>
<div class="one utility at a time">...</div>
<div class="may create html bloat">...</div>

On the other hand, class composition with selectors might not reduce CSS bloat. But it definitely introduces less HTML bloat.

<div class="class composition">...</div>
<div class="card primary">...</div>
<div class="may override properties">...</div>
<div class="less html bloat"> ... </div>

Which is better? ¯\_(ツ)_/¯

HTML bloat and CSS bloat are probably the least of your concerns

We know this: HTML can contain a huge amount of things and it doesn't affect performance much. CSS, too. 500 lines of CSS is approx 12kb to 15kb (according to Claude). An image typically weighs 150kb or perhaps even more.
For most projects, optimizing your use of images is going to net you better weight reduction than agonizing over utility vs. selector composition. Refactoring your codebase to decrease CSS bloat is not likely to increase performance much. Maybe a 2ms decrease in load times? But refactoring your codebase to improve developer recognition and make it easier to style? Much more worth it. So, Iâd say: HTML and CSS bloat are pretty inconsequential. Itâs worthwhile to focus on architecture, structure, and clarity instead. Advanced compositions If we zoom out, we can see that all styles we write fall into four categories: Layouts: Affects how we place things on the page Typography: Everything font related Theming: Everything color related Effects: Nice good to have stuff like gradients, shadows, etc. Styles from each of these four categories donât intersect with each other. For example: font-weight belongs exclusively to the Typography category colour belongs exclusively to the Theming category It makes sense to create composable classes per category â when thatâs done, you can mix-and-match these classes together to create the final output. Very much like Lego, for the lack of a better example. (Alright, maybe Duplo for the kids?) So your HTML might end up looking like this, assuming you do class composition for these four categories: <!-- These are all pseudo classes. Use your imagination for now! --> <div class="layout-1 layout-2 effects-1"> <h2 class="typography-1 theming-1"> ... </div> <div class="typography-2"> ... </div> </div> A real example of this would be the following, if we used classes from Splendid Styles and Splendid Layouts: <div class="card vertical elevation-3"> <h2 class="inter-title"> ... </h2> <div class="prose"> ... </div> </div> Iâm writing more about this four-category system and how Iâm creating composable classes in my latest work: Unorthodox Tailwind. Give it a check if youâre interested! Wrapping up To sum up: CSS is composable by nature. Developers seem to be quite narrow-minded about what composition means in CSS. You can do composition in the HTML or in the CSS. Styles we write can be divided into four categories â layouts, typography, theming, and effects. And finally: Splendid Styles contains classes that can aid composition in each of these four categories. Splendid Layouts handles the layout portion. And Iâm writing more about how I create composable classes in my course Unorthodox Tailwind. Composition in CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
What You Need to Know About CSS Color Interpolation
- Articles
- color
- CSS functions
Color what? Sunkanmi Fafowora explains how an everyday task for CSS can be used to create better color experiences.
Color interpolation, loosely speaking, is the process of determining the colors between two color points. It allows us to create unique colors, beautiful palettes, better gradients, and smooth transitions. I recently wrote a Guide to CSS Color Functions but didn't have the chance to explain color interpolation in any great depth, which is a shame, since it allows us to create cool demos like this one: Did you notice how oklch(80% 0.3 340) interpolates to oklch(80% 0.3 60), then to oklch(80% 0.3 180), then to oklch(80% 0.3 270) and back to oklch(80% 0.3 340) using CSS animation? Well, I did! And that's just a powerful use of interpolation.

Where can we use color interpolation?

Again, color interpolation is all over CSS. These properties and functions support color interpolation either through direct mixing, gradients, or transitions:
- All color gradients, like linear-gradient(), conic-gradient(), etc.
- color-mix()
- animation
- transition
- filter
- All functions in the relative color syntax

In gradients and the color-mix() function, we even have a formal syntax for color interpolation:

<color-interpolation-method> = in [ <rectangular-color-space> | <polar-color-space> <hue-interpolation-method>? ]
<color-space> = <rectangular-color-space> | <polar-color-space>
<rectangular-color-space> = srgb | srgb-linear | display-p3 | a98-rgb | prophoto-rgb | rec2020 | lab | oklab | xyz | xyz-d50 | xyz-d65
<polar-color-space> = hsl | hwb | lch | oklch
<hue-interpolation-method> = [ shorter | longer | increasing | decreasing ] hue

Yes, that's a convoluted definition, but if we go ahead and inspect how this syntax works in color-mix(), for example, we would have something like this:

.element {
  color: color-mix(in lch longer hue, red, blue);
}

The CSS color-mix() function provides a way for us to mix different colors in any color space, which is what color interpolation is all about: going from one color to another. Our key focus is the in lch longer hue part, which specifies how color-mix() does the interpolation. This is basically saying, "Hey CSS, interpolate the next colors in the CIELCH color space using a longer hue arc." Yes, the in lch part means the interpolation is done in CIELCH, one of the many CSS color spaces, but we'll get to what longer hue exactly means later. Just remember:
- The in keyword always precedes the color interpolation method.
- The second value is the color space used for mixing.
- The third value is an optional hue interpolation method ending with the hue keyword.

This same syntax appears in all gradient functions, where colors are interpolated gradually to get a smooth gradient. Look at how tweaking the gradient with the color interpolation syntax can give us a completely new gradient:

.element {
  background: linear-gradient(in oklch longer hue 90deg, magenta, cyan);
}

Let's backtrack a little, though. Interpolation can occur in two major kinds of color spaces: rectangular and polar.

Rectangular color spaces

Rectangular color spaces represent colors using Cartesian coordinates on a three-dimensional plane, which you might already know as the X (horizontal), Y (vertical), and Z (depth) axes on a graph. Rectangular color spaces are the same sort of graph, but mapping color points instead. For example, the sRGB color space has three axes, responsible for the amount of a color's redness, blueness, and greenness.

Polar color spaces

Polar color spaces also represent colors in a three-dimensional plane, just like rectangular color spaces, but they are shaped like a cylinder instead of a rectangular box.
A color point is represented by three values:
- The height of the point along the cylinder, usually assigned to lightness or brightness.
- The radial distance from the center, usually assigned to chroma or saturation.
- The angle around the center, assigned to the hue.

Credit: Wikipedia

What makes polar color spaces unique is the hue angle. Since it's an angle, and angles are cyclic (like a continuous circle), we have more options for how it can be interpolated.

Hue interpolation

Think of hue interpolation like finding the distance between two times on a clock. Let's assume the clock can go clockwise (forwards) or counterclockwise (backwards) in time. The minute hand is at 10 minutes (the 2 on the clock face). If we want the shortest distance to 50 minutes (the 10), then we would make a counterclockwise turn, like going back in time, since that is shorter than moving forward in a clockwise direction. That's because if you take the longer route, you'll have to pass through 3, 4, 5, and so on, all the way to 10. Taking the shorter (counterclockwise) route, you would reach the 10 in less time (a distance of 20 minutes rather than 40).

Hue interpolation works similarly. It is a CSS algorithm that determines how you want hue colors in polar color spaces to be mixed, and the direction you want to take between two hue points. There are four types of hue interpolation in CSS. Let's go over those next.

shorter and longer

The shorter (default value) hue interpolation method simply takes the shorter route, while the longer hue interpolation method takes the longer route when mixing colors between two hue points. Imagine blending two hue values, red (0deg) and blue (240deg). There are two ways to do this:
- Go the longer route (distance of 240deg).
- Go the shorter route (distance of 120deg).

If shorter is used, the browser takes the shorter route (120deg). Otherwise, if longer is used, the browser takes the longer route (240deg). This offers up a nice and unique blend of colors depending on your preferences. Hue interpolation is useful in creating smooth color transitions and gradients, giving plenty of life to websites using color. The shorter or longer hue interpolation, depending on the shortest or longest distance between two hue value points, can go either clockwise or counterclockwise. We can also pin the direction itself, without using either of these keywords, which we will look at next.

increasing and decreasing

Sticking with our clock analogy, the increasing hue interpolation method is like moving the minute hand from 2 to 10, always in a clockwise direction. Even if the final value is 1, it would still go in a clockwise direction, doing almost a full turn. If, however, the hue interpolation method is set to decreasing, the minute hand will always go in a counterclockwise direction. As the specification says, "[d]epending on the difference between the two angles, this will either look the same as shorter or as longer."

If the angle goes from 20deg to 50deg using the increasing hue interpolation value, the value will move clockwise from 20deg to 50deg, displaying the colors in between. However, if the hue interpolation method is set to decreasing, then the algorithm takes the value from 20deg to 50deg in a counterclockwise direction. Since increasing means the clock's minute hand is constantly moving forward, the value can reach up to 360deg, a full circle. If the angle reaches 360deg, it resets back to 0deg until it reaches the next point. But if decreasing reaches 0deg, then it resets to 360deg, keeping the hue change consistent.
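Before we get to the practical stuff, here is a small side-by-side sketch (the class names are only for illustration) of how those keywords read in real syntax, using the same red-to-blue pair from the example above:

.shorter-arc {
  /* red (0deg) to blue (240deg) across the 120deg arc, passing through magenta */
  background: linear-gradient(in hsl shorter hue, red, blue);
}

.longer-arc {
  /* same two stops, but across the 240deg arc, passing through yellow, green, and cyan */
  background: linear-gradient(in hsl longer hue, red, blue);
}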
How is this useful? Yes, all this theory is great: we can use interpolation to get the intermediary color(s) between two colors and make new kinds of colors, but how can we actually use it to create better color experiences in CSS? Creating gradients Color interpolation happens frequently in all CSS gradient functions. Take, for example, the conic-gradient() function, which makes it easy to create a smooth transition of colors that rotate around a center point: background: conic-gradient( from 0deg, oklch(70% 0.3 0deg), oklch(70% 0.3 120deg), oklch(70% 0.3 240deg), oklch(70% 0.3 360deg) ); Notice how the hue blends smoothly between each color stop point? Itâs beautiful. Color mixing Reading about color-mix() in the CSS-Tricks Almanac will give you a basic idea of how this is done, but if youâre like me and want the raw code, here it is: /* First Box */ background-color: color-mix(in oklch, rgb(255 0 0) 50%, lch(60% 40% 220deg) 50%); /* Second Box */ background-color: color-mix(in oklch longer hue, rgb(255 0 0) 50%, lch(60% 40% 220deg) 50%); A great advantage of color-mix() is that you gain the ability to mix colors in different color spaces within another color space, thereby producing a unique color. Again, itâs moving from one color into another and the direction we take for mixing colors matters. Animation We can animate the transition between colors! So, instead of mixing two specific points, we can watch the color transition between all of the colors in between the two points! @keyframes bg-shift { from { background-color: oklch(30% 0.3 20deg); /* dark pink */ } to { background-color: oklch(70% 0.3 200deg); /* Cool bluish */ } } References Okay, Color Spaces by Eric Portis ColorAide Color Interpolation Documentation CSS Color Module Level 4 Interpolating Colors by Chris Brunell What You Need to Know About CSS Color Interpolation originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
New requirement for apps using Sign in with Apple for account creation
Starting January 1, 2026, developers based in the Republic of Korea must provide a server-to-server notification endpoint when registering a new Services ID, or updating an existing Services ID, to associate their website with an app using Sign in with Apple.
As a reminder, registering a notification endpoint allows Apple to send you important updates about the people who use your app and their account status, including:
- Changes in email forwarding preferences.
- Account deletions in your app.
- Permanent Apple Account deletions.
To learn more about server-to-server notifications, see WWDC20 session 10173: Get the most out of Sign in with Apple.
When you receive these notifications, you should immediately update any data associated with the account change in the app, as well as any necessary server infrastructure, to give people more control of the personal data they've shared. For more information, see Processing changes for Sign in with Apple accounts.
Before submitting a new app to the App Store, or updating an existing app configuration to register a new Services ID or modify an existing Services ID, please read the guidance below.
Account change guidance
Account changes are directly related to privacy and control for the user and their personal data, and confirming account changes should be straightforward and transparent.
For account email forwarding changes:
- Ensure any displayed user data affected by the account change matches the change event in the notification payload. Typically, this data is displayed in the app's account settings or user profile.
- If people need to visit a website to finish changing or verifying their email address, include a link directly to the page on your website where they can complete the process.
- Keep users informed. If the email forwarding change affects other services you offer, let them know. If your app supports In-App Purchases, help people understand how billing, order tracking, and cancellations will be handled with the new email address.
For account deletions:
Note: Always follow applicable legal requirements for storing and retaining user account information and for handling account changes and deletions. This includes complying with local laws where your apps are available. If you have questions regarding your legal obligations, check with your legal counsel.
Updated Apple Developer Program License Agreement now available
The Apple Developer Program License Agreement has been revised to support updated policies and to provide clarification. Please review the changes below and sign in to your account to accept the updated terms.
- Section 3.3.1(B): Updated requirements related to interpreted code
- Definitions, Section 3.3.3(F): Specified requirements for use of the Location Push Service Extension
- Section 3.3.8(A), Attachment 4: Added links to terms for Xcode Cloud and CloudKit related to Regulation (EU) 2023/2854 (EU Data Act)
- Section 3.3.9(D): Updated terms regarding the entity that distributes the Tap to Pay APIs
- Schedule 1, Exhibit C: Updated Section 1 definition for Promo Codes
- Schedule 2, Section 3.13: Updated references to Offer Codes
- Schedule 2, Section 3.13(b): Updated availability terms for Offer Codes
Translations of the updated agreement will be available on the Apple Developer website within one month.
New requirements for apps available in Texas
Beginning January 1, 2026, a new state law in Texas (SB2420) introduces age assurance requirements for app marketplaces and developers. While we share the goal of strengthening kids' online safety, we are concerned that SB2420 impacts the privacy of users by requiring the collection of sensitive, personally identifiable information to download any app, even if a user simply wants to check the weather or sports scores. Apple will continue to provide parents and developers with industry-leading tools that help enhance child safety while safeguarding privacy within the constraints of the law.
Once this law goes into effect, users located in Texas who create a new Apple Account will be required to confirm whether they are 18 years or older. All new Apple Accounts for users under the age of 18 will be required to join a Family Sharing group, and parents or guardians will need to provide consent for all App Store downloads, app purchases, and transactions using Apple's In-App Purchase system by the minor. This will also impact developers, who will need to adopt new capabilities and modify behavior within their apps to meet their obligations under the law. Similar requirements will come into effect later next year in Utah and Louisiana.
Today we're sharing details about updates that we're making and the tools we'll provide to help developers meet these new requirements.
To assist developers in meeting their obligations in a privacy-preserving way, we'll introduce capabilities to help them obtain users' age categories and manage significant changes as required by Texas state law. The Declared Age Range API is available to implement now, and will be updated in the coming months to provide the required age categories for new account users in Texas. And new APIs launching later this year will enable developers, when they determine that a significant change has been made to their app, to invoke a system experience that allows the user to request that parental consent be re-obtained. Additionally, parents will be able to revoke consent for a minor continuing to use an app. More details, including additional technical documentation, will be released later this fall.
We know protecting kids from online threats requires constant vigilance and effort. That's why we will continue to create industry-leading features to help developers provide age-appropriate experiences and safeguard privacy in their apps and games, and empower parents with a comprehensive set of tools to help keep their kids safe online.
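The exact Swift surface of the Declared Age Range API should be taken from Apple's documentation, but a hedged sketch of the kind of flow it enables is below. Everything here is hypothetical: AgeCategory, AgeRangeProviding, and requestAgeCategory(ageGates:) are illustrative stand-ins, not confirmed API, and exist only to show the general "request an age category, then gate the experience" pattern described above.

// Hypothetical illustration only; not the real Declared Age Range API surface.
enum AgeCategory {
    case under13, under16, under18, adult, undisclosed
}

protocol AgeRangeProviding {
    // Asks the system for the user's declared age category against the
    // given age gates (for example 13, 16, and 18).
    func requestAgeCategory(ageGates: [Int]) async throws -> AgeCategory
}

struct ContentGate {
    let provider: AgeRangeProviding

    func configureExperience() async {
        let category = (try? await provider.requestAgeCategory(ageGates: [13, 16, 18])) ?? .undisclosed
        switch category {
        case .under13, .under16:
            print("Show the most restricted, age-appropriate experience")
        case .under18:
            print("Show the teen experience and defer mature content")
        case .adult:
            print("Show the full experience")
        case .undisclosed:
            print("Fall back to the most conservative experience")
        }
    }
}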
Hello Developer: October 2025
This month: Watch a new code-along video to discover how to use the Foundation Models framework to access the on-device LLM that powers Apple Intelligence.
Plus, dive into new games offerings, explore new App Store Connect features, and find out how the air-quality app Paku is helping its users breathe easy.
Upcoming Currency Change in Bulgaria
Beginning January 1, 2026, Bulgaria will adopt the euro (EUR) as its official currency, replacing the Bulgarian lev (BGN). As a result, the currency for App Store purchases and subscriptions in Bulgaria will change from BGN to EUR on this date. Your proceeds from sales in Bulgaria will also change to euros.
If you've selected the lev as the currency for your bank account in App Store Connect, it will automatically change to the euro on December 1, 2025. If you've selected a different currency for your bank account, you'll continue to be paid in that currency. The statutory fixed conversion rate is 1.95583 BGN = 1 EUR as established on July 8, 2025, by the Council of the European Union in connection with the introduction of the euro as the official currency in the Republic of Bulgaria.
- Auto-renewing subscriptions, manually priced apps and In-App Purchases, or apps and In-App Purchases with Bulgaria as the base storefront: Pricing on the Bulgaria storefront will be updated to a euro price based on the statutory conversion rate and rounded down to a supported euro price point (illustrated in the sketch after this list). If you've chosen Bulgaria as the base storefront for your app or In-App Purchases, pricing in other storefronts may be updated to maintain equalization with the new euro price.
- Other apps or In-App Purchases: Pricing on the Bulgaria storefront will be converted to the euro using the standard equalization for euro markets where App Store sales are subject to value-added tax (VAT).
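As a rough illustration of the conversion mechanics described above, the Swift sketch below divides a lev price by the statutory rate and then snaps it down to the nearest entry in a hypothetical, abbreviated list of supported euro price points. The price-point list here is purely illustrative; the actual list Apple uses is much longer and may differ.

import Foundation

// Statutory fixed conversion rate set by the Council of the European Union.
let bgnPerEur = Decimal(string: "1.95583")!

// Hypothetical, abbreviated list of supported euro price points, used only
// to illustrate "rounded down to a supported euro price point".
let euroPricePoints: [Decimal] = [3.99, 4.49, 4.99, 5.49, 5.99, 6.99]

func euroPrice(forLev levPrice: Decimal) -> Decimal? {
    let exact = levPrice / bgnPerEur
    // Pick the highest supported price point that does not exceed the
    // exact converted value.
    return euroPricePoints.filter { $0 <= exact }.max()
}

// Example: a 9.99 BGN price converts to roughly 5.11 EUR and rounds down
// to 4.99 EUR with this illustrative price-point list.
print(euroPrice(forLev: Decimal(string: "9.99")!) ?? "no supported price point")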
You can change the prices of your apps, In-App Purchases, and auto-renewable subscriptions at any time.
Auto-renewable subscriptions won't be interrupted. Each current, preserved, and future scheduled subscription price will be automatically updated to euros. Customers with auto-renewable subscriptions will receive an email informing them of the currency change.
Per the Euro Act, you'll need to communicate both the lev and euro prices for In-App Purchases to customers within your app throughout the transitional period, which ends on August 8, 2026. This messaging should appear anywhere prices are displayed.
Your Sales and Trends reports and monthly financial reports will also be updated. For Sales and Trends, euros will be indicated for transactions after the currency change. For monthly financial reports, earnings from sales that occurred before the currency change will be in the BG report, and earnings from sales after the currency change will be in the EU report.
All refunds issued after January 1, 2026, will be refunded in euros, regardless of the date of purchase.
Get ready with the latest beta releases
The beta versions of iOS 26.1, iPadOS 26.1, macOS 26.1, tvOS 26.1, visionOS 26.1, and watchOS 26.1 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 26.0.1.
App Store submissions now open for the latest OS releases
iOS 26, iPadOS 26, macOS Tahoe 26, tvOS 26, visionOS 26, and watchOS 26 will soon be available to customers worldwide, which means you can now submit apps and games that take advantage of Apple's broadest design update ever.
Build your apps and games using the Xcode 26 Release Candidate and latest SDKs, test with TestFlight, and submit for review to the App Store. By taking advantage of the new design and Liquid Glass, the Foundation Models framework, the new Apple Games app, and more, you can deliver even more unique experiences on Apple platforms.
Starting April 2026, apps and games uploaded to App Store Connect need to meet the following minimum requirements.
- iOS and iPadOS apps must be built with the iOS 26 & iPadOS 26 SDK or later
- tvOS apps must be built with the tvOS 26 SDK or later
- visionOS apps must be built with the visionOS 26 SDK or later
- watchOS apps must be built with the watchOS 26 SDK or later
Hello Developer: September 2025
Be ready for the new design. Whether you're a longtime pro or a hobbyist, get inspired and make your apps and games look their best by exploring videos, downloads, and docs.
Plus, sign up for one-on-one appointments to dive into machine learning and AI, and meet the team behind the wholesome open-world game Hello Kitty Island Adventure.
Awe dropping.
Join us for a special Apple Event on September 9 at 10 a.m. PT.
Watch on apple.com, Apple TV, or YouTube Live.
Tax and Price Updates for Apps, In-App Purchases, and Subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Tax and price updates
As of August 21:
Your proceeds from the sale of eligible apps and In-App Purchases have been modified in:
- Brazil: Imposto sobre Operações Financeiras (IOF) of 3.5%
- Canada: Digital services tax (DST) no longer applicable
- Estonia: Value-added tax (VAT) rate increase from 22% to 24%
- Romania:
- VAT rate increase from 19% to 21%
- Reduced VAT rate increase from 5% to 11% for news, magazines, books, and audiobooks
- The Philippines: VAT introduction of 12% for developers based outside of the Philippines
- Vietnam:
- Organizations based outside of Vietnam: VAT rate increase from 5% to 10%.
- Individual developers based outside of Vietnam: Personal income tax (PIT) introduction of 5%, replacing the corporate income tax (CIT). Reduced VAT rate of 0% for news, magazines, and books no longer applies and all content will be taxed at the standard rate.
- Organizations based in Vietnam: Apple will no longer remit foreign contractor tax (FCT) on sales to end customers. FCT of 5% introduced on Apple's commission.
- Individual developers based in Vietnam: Personal income tax (PIT) introduction of 2%, replacing the corporate income tax (CIT). FCT of 5% introduced on Apple's commission. Reduced VAT rate of 0% for news, magazines, and books no longer applies and all content will be taxed at the standard rate.
Exhibits B and C of the Paid Applications Agreement will be updated to indicate that Apple collects and remits applicable taxes in the Philippines, and the relevant tax changes in Vietnam.¹
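To make the arithmetic concrete, here is a small, hedged sketch of how a VAT introduction such as the Philippines' 12% listed above typically flows into proceeds. It assumes the common model in which applicable tax is deducted from the customer price before the commission is applied, and the 30% commission rate is purely a placeholder; actual rates depend on your agreement and program.

import Foundation

// Illustrative only: tax is deducted from the customer price first, then the
// commission is applied to the remainder. The 30% rate is a placeholder.
func estimatedProceeds(customerPrice: Decimal, vatRate: Decimal, commissionRate: Decimal = 0.30) -> Decimal {
    let netOfTax = customerPrice / (1 + vatRate)
    return netOfTax * (1 - commissionRate)
}

// Example: a PHP 499 price with 12% VAT and a 30% commission leaves
// roughly PHP 311.88 in proceeds.
let proceeds = estimatedProceeds(customerPrice: 499, vatRate: 0.12)
print(proceeds)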
Beginning September 8:
Pricing for apps and In-App Purchases will be updated for the Philippines and Vietnam if you haven't selected one of these storefronts as the base storefront for your app or In-App Purchase.² These updates also consider VAT introductions and changes listed in the tax updates section above.
If you've selected the Philippines or Vietnam as the base storefront for your app or In-App Purchase, prices won't change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won't change in any region if your In-App Purchase is an auto-renewable subscription. Prices also won't change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In-App Purchases, and auto-renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your appâs base country or region
Pricing and availability start times by country or region
Set a price for an In-App Purchase
Learn more about your proceeds
¹ Translations of the updated agreement will be available on the Apple Developer website within one month.
² Excludes auto-renewable subscriptions.
Hello Developer: August 2025
Sign up for new Apple developer workshops all over the world. Plus, download Apple UI design kits for Figma and Sketch, meet the Italy-based team behind Sunlitt, and more.
Updated age ratings in App Store Connect
The App Store is designed to be a safe and trusted place for all ages, including children. The age rating system for apps and games has been updated in order to provide people with more granular age ratings. Weâve also introduced new age rating questions to help identify sensitive content in your app and added the ability to set a higher rating to reflect your appâs minimum age requirement. Ratings for all apps and games on the App Store have been automatically updated to align with this new system and will be reflected on Apple devices running beta versions of iOS 26, iPadOS 26, macOS Tahoe 26, tvOS 26, visionOS 26, and watchOS 26.
The updated age rating system adds 13+, 16+, and 18+ to the existing 4+ and 9+ ratings. Age ratings are assigned to each country or region and may vary based on region-specific suitability standards.
We've introduced a new set of required questions to the ratings questionnaire for all apps. These new questions cover:
- In-app controls.
- Capabilities.
- Medical or wellness topics.
- Violent themes in your app or game.
Your answers to these questions will help Apple better calculate a rating and help you deliver an age-appropriate experience.
If your app has a policy requiring a higher minimum user age than the rating assigned by Apple, you can set a higher age rating after you respond to the age ratings questions. You can view the age rating for each of your apps under the updated system and respond to the new questions for each app in the App Information section in App Store Connect.
As a reminder, you must consider how all app features, including AI assistants and chatbot functionality, impact the frequency of sensitive content appearing within your app to make sure it receives the appropriate rating. All apps are subject to the App Review Guidelines, such as the safety guidelines regarding objectionable content or user generated content, and must abide by all applicable local laws and regulations, like the Children's Online Privacy Protection Act ("COPPA") and the European Union's General Data Protection Regulation ("GDPR").
Please provide responses to the updated age rating questions for each of your apps by January 31, 2026, to avoid an interruption when submitting your app updates in App Store Connect.
New 64-bit requirement for watchOS apps
Beginning April 2026, watchOS apps uploaded to App Store Connect must also include 64-bit support and be built with the watchOS 26 SDK. To enable 64-bit support in your project, we recommend using the default Xcode build setting of "Standard architectures" to build a single binary with 64-bit code.
You can test ARM64 compatibility for your apps in the Xcode Simulator, and on Apple Watch Series 9 or 10, or Apple Watch Ultra 2 running watchOS 26 beta.
iOS and iPadOS 26 design kits are here
Apple UI design kits for Figma and Sketch are now available for iOS and iPadOS 26.
Updates for apps in the European Union
The European Commission has required Apple to make a series of additional changes under the Digital Markets Act:
Communication and promotion of offers
- Today, we're introducing updated terms that let developers with apps in the European Union storefronts of the App Store communicate and promote offers for purchase of digital goods or services available at a destination of their choice. The destination can be a website, alternative app marketplace, or another app, and can be accessed outside the app or within the app via a web view or native experience.
- App Store apps that communicate and promote offers for digital goods or services will be subject to new business terms for those transactions: an initial acquisition fee, a store services fee, and, for apps on the StoreKit External Purchase Link Entitlement (EU) Addendum, the Core Technology Commission (CTC). The CTC reflects value Apple provides developers through ongoing investments in the tools, technologies, and services that enable them to build and share innovative apps with users.
- Music streaming apps on the App Store in the European Economic Area (EEA) wanting to use the Music Streaming Services Entitlement (EEA) can use these options.
Update to Business Terms for Apps in the European Union
- By January 1, 2026, Apple plans to move to a single business model in the EU for all developers. Under this single business model, Apple will transition from the Core Technology Fee (CTF) to the CTC on digital goods or services. The CTC will apply to digital goods or services sold by apps distributed from the App Store, Web Distribution, and/or alternative marketplaces.
- Apps currently under the Alternative Terms Addendum for Apps in the EU continue to be subject only to the CTF until the transition to the CTC is fully implemented next year. At that time, qualifying transactions will be subject to the CTC, and the CTF will no longer apply. Additional details regarding this transition will be provided at a later date.
User Experience Update
- Beginning with iOS 18.6 and iPadOS 18.6, iOS and iPadOS will provide an updated user experience in the EU for installing alternative marketplaces or apps from a developer's website. Additionally, later this year, we will provide an API that will allow developers to initiate the download of alternatively distributed apps they publish from within their app.
To learn more, view Communication and promotion of offers on the App Store in the EU. To read the full terms, view the Alternative Terms Addendum for Apps in the EU or the StoreKit External Purchase Link Entitlement Addendum for EU Apps. You can also request a 30-minute online appointment to ask questions and provide feedback about these changes.
Today @ WWDC25: Day 2
Welcome to Day 2 at WWDC25! Watch the Platforms State of the Union recap, then dive into all the updates to Swift, SwiftUI, and Xcode through group labs and video sessions.
WWDC25 Platforms State of the Union Recap
Today's group labs: Developer Tools, Swift, Metal & game technologies, and Camera & Photos frameworks.
Find out what's new for Apple developers
Discover the latest advancements on all Apple platforms. With incredible new features in iOS, iPadOS, macOS, tvOS, visionOS, and watchOS, and major enhancements across languages, frameworks, tools, and services, you can create even more unique experiences in your apps and games.
Updated agreements and guidelines now available
The Apple Developer Program License Agreement and App Review Guidelines have been revised to support new features and updated policies, and to provide clarification. Please review the changes below.
Apple Developer Program License Agreement
- Section 3.3.3(D): Updated language on requirements for data and privacy.
- Section 3.3.3(N): Updated requirements for use of the ID Verifier APIs.
- Definitions, 3.3.3(P): Specified requirements for use of the Declared Age Range API.
- Definitions, 3.3.7(G): Specified requirements for use of the Wi-Fi Aware framework.
- Definitions, 3.3.7(H): Specified requirements for use of the TelephonyMessagingKit APIs.
- Definitions, 3.3.7(I): Specified requirements for use of the Default Dialer APIs.
- Definition, Section 3.3.8(H), Attachment 11: Specified requirements for use of EnergyKit.
- Definitions, 3.3.8(I): Specified requirements for use of the Foundation Models framework.
- Definitions, Attachment 4: Specified requirements for use of the iCloud Extended Share APIs.
- Section 6.4: Removed language on Bitcode submissions as it is no longer applicable, and replaced it with terms regarding iOS app widgets on CarPlay.
- Section 7.4(B): Updated and clarified requirements for TestFlight related to digital purchases and tester invitations.
- Section 7.7: Updated language on customization of icons and widgets.
- Section 7.8: Specified terms related to the Apple Games app.
- Attachment 6: Updated terms regarding the entity that distributes the map in China.
App Review Guidelines
- 3.1.2(a), bullet 2: This language has been deleted ("You may offer a single subscription that is shared across your own apps and services").
- 3.1.2(a), bullet 5: This language has been relocated to Guideline 3.2.2(x).
- 3.2.1(viii): Clarified that financial apps must have necessary licensing and permissions in the locations where developers make them available.
- 3.2.2(x): This new guideline contains the language relocated from Guideline 3.1.2(a), bullet 5, and permits developers to otherwise incentivize users to take specific actions within app.
Please sign in to your account to accept the updated Apple Developer Program License Agreement.
Translations of the guidelines will be available on the Apple Developer website within one month.
Today @ WWDC25: Day 1
WWDC25 is here! Watch a quick welcome video to help you get started, then dive into sessions and sign up for tomorrow's group labs.
Welcome to WWDC25
Tuesday's group labs: Developer Tools, Swift, Metal & game technologies, and Camera & Photos frameworks.
Introducing the 2025 Apple Design Award winners and finalists
An artistic puzzler with a wildlife twist. A translation app powered by machine learning and stickers. And a card game that's been on quite a run. Say hello to the wildly inventive crop of 2025 Apple Design Award honorees.
Hello Developer: June 2025
WWDC25 is just days away! Here's everything you need to get ready, plus a big announcement to start things off. Say hello to the wildly inventive crop of 2025 Apple Design Award winners and finalists.
Sleek peek.
WWDC25 is almost here! Find out how to tune in to the Keynote and Platforms State of the Union on Monday, June 9.
Tax and Price updates for Apps, In-App Purchases, and Subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Tax and price updates
As of May 16:
Your proceeds from the sale of eligible apps and In-App Purchases have been modified in Brazil to account for the Contribuições de Intervenção no Domínio Econômico (CIDE) tax introduction of 10% for developers based outside of Brazil.
Beginning June 2:
Pricing for apps and In-App Purchases will be updated for Brazil and Kazakhstan if you haven't selected one of these storefronts as the base storefront for your app or In-App Purchase.¹ The updates in Brazil also consider the 10% CIDE tax introduction.
If you've selected Brazil or Kazakhstan as the base storefront for your app or In-App Purchase, prices won't change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won't change in any region if your In-App Purchase is an auto-renewable subscription. Prices also won't change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In-App Purchases, and auto-renewable subscriptions at any time.
Additional upcoming changes
Beginning August 4:
All auto-renewable subscription price increases in Austria, Germany, and Poland will require customers to consent to the new price for their subscription to continue renewing.
- Price increases scheduled with a start date on or after August 4: All customers must consent to the new price. If a subscriber doesn't agree to the new price or takes no action, Apple will continue to request consent approximately weekly through email, push notifications, and in-app messaging until their subscription expires at the end of their current billing cycle.
- Price increases scheduled with a start date before August 4: Current notice criteria will remain in effect, even if the renewal occurs after August 4 (for annual subscriptions, renewal could be as late as August 2026). See criteria, noting that consent may apply to customers depending on the size or velocity of your price increases.
To help ensure a smooth transition, we recommend avoiding scheduling price increases with a start date between August 2 and August 4.
Learn more about managing your prices
View or edit upcoming price changes
Edit your appâs base country or region
Pricing and availability start times by country or region
Set a price for an In-App Purchase
Learn more about your proceeds
¹ Excludes auto-renewable subscriptions.
Hello Developer: May 2025
In this edition: Join us to learn how to make your apps more accessible to everyone. Plus, check out our new and refreshed Pathways, and uncover the time-traveling secrets of the Apple Design Award-winning game The Wreck.
Random access memories: Inside the time-shifting narrative of The Wreck
The Wreck is filed under games, but itâs also been called a visual novel, an interactive experience, and a playable movie. Florent Maurin is OK with all of it. âI like to think weâre humbly participating in expanding the idea of what a video game can be,â he says.
Maurin is the co-writer, designer, and producer of The Wreck â and here weâll let you decide what to call it. The Wreck tells the tale of Junon, a writer whoâs abruptly called to a hospital to make a life-changing decision involving her mother. The story is anchored by the accident that lends the game its name, but the ensuing narrative is splintered, and begins to take shape only as players navigate through seemingly disconnected scenes that can be viewed multiple times from different perspectives. The Wreck is far from light. But its powerful story and unorthodox mechanics combine for a unique experience.
âWe tried to make a game thatâs a bit off the beaten path,â says Maurin, whoâs also the president and CEO of The Pixel Hunt studio, âand hopefully it connects with people.â
ADA FACT SHEET
The Wreck
- Winner: Social impact
- Team: The Pixel Hunt
- Available on: iPhone, iPad
- Team size: 4
Maurin is a former childrenâs journalist who worked at magazines and newspapers in his native France. After nearly 10 years in the field, he pivoted to video games, seeing them as a different way to share real stories about real people. âReality is a source of inspiration in movies, novels, and comic books, but itâs almost completely absent in the gaming landscape,â he says. âWe wanted to challenge that.â
Founded in 2014, The Pixel Hunt has released acclaimed titles like the App Store Awardâwinning historical adventure Inua and the text-message adventure Bury Me, My Love. It was near the end of the development process for the latter that Maurin and his daughter were involved in a serious car accident.
âIt was honestly like a movie trope,â he says. âTime slowed down. Weird memories that had nothing to do with the moment flashed before my eyes. Later I read that the brain parses through old memories to find relevant knowledge for facing that kind of situation. It was so sudden and so intense, and I knew I wanted to make something of it. And what immediately came to mind was a game.â
Junon's interactions with the hospital staff drive the narrative in The Wreck.
But Maurin was too close to the source material; the accident had left a lasting impact, and he separated himself from the creative process. âI think I was trying to protect myself from the intensity of that feeling,â he says. âThatâs when Alex, our art director, told me, âLook, this is your idea, and I donât think itâll bloom if you donât really dig deep and own the creative direction.â And he was right.â
That was art director Alexandre Grilletta, who helmed the development team alongside lead developer Horace Ribout, animator Peggy Lecouvey, sound designers Luis and Rafael Torres, and Maurinâs sister, Coralie, who served as a âsecond brainâ during writing. (In a nice bit of serendipity, the gameâs script was written in an open-source scripting language developed by Inkle, which used it for their own Apple Design Award-winning game, Overboard, in 2022.)
Junon's sister might not be an entirely welcome presence in The Wreck.
The story of The Wreck is split into two parts. The first â what the team calls the âlast dayâ â follows Junon at the hospital while she faces her motherâs situation as well as revealing interactions with her sister and ex-husband. Maurin says the âlast dayâ was pretty straightforward from a design standpoint. âWe knew we wanted a cinematic look,â he says, âso we made it look like a storyboard with some stop-motion animation and framing. It was really nothing too fancy. The part that was way more challenging was the memories.â
Those âmemoriesâ â and the backstory they tell â employ a clever mechanism in which players view a scene as a movie and have the ability to fast-forward or rewind the scene. These memory scenes feel much different; theyâre dreamlike and inventive, with swooping camera angles, shifting perspectives, and words that float in the air. âI saw that first in What Remains of Edith Finch,â says Maurin. âI thought it was an elegant way of suggesting the thing that triggers a characterâs brain in that moment.â
Junon's thoughts are often conveyed in floating phrases that surround her in stressful moments.
Successive viewings of these memories can reveal new details or cast doubt on their legitimacy â something Maurin wrote from experience. âIâll give you an example,â he says. âWhen my parents brought my baby sister home from the hospital, I remember the exact moment they arrived in the car. Itâs incredibly vivid. But the weird part is: This memory is in the third person. I see myself tiptoeing to the window to watch them in the street â which is impossible! I rewrote my own memory for some reason, and only my brain knows why it works like that. But it feels so real.â
Throughout the development process, Maurin and team held close to the idea of a âmoving and matureâ story. In fact, early prototypes of The Wreck were more gamified â in one version, players grabbed floating items â but playtesters found the activity distracting. âIt took them out of the story,â Maurin says. âIt broke the immersion. And that was counterproductive to our goal.â
Items in The Wreck â like this tin of peppermints â often carry a larger meaning.
Maurin admits that approaching games with this mindset can be a challenge. âSome players are curious about our games and absolutely love them. Some people think, âThese donât fit the perception of what I think I enjoy.â And maybe the games are for them, and maybe theyâre not. But this is what weâve been doing for 11 years. And I think we're getting better at it.â
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Updated guidelines now available
The App Review Guidelines have been updated for compliance with a United States court decision regarding buttons, external links, and other calls to action in apps. These changes affect apps distributed on the United States storefront of the App Store, and are reflected in updates to Guidelines 3.1.1, 3.1.1(a), 3.1.3, and 3.1.3(a).
View the App Review Guidelines
Translations of the guidelines will be available on Apple Developer website within one month.
Hello Developer: April 2025
In this edition: Revisit foundational sessions, join us to dive into SwiftUI, and meet an Apple Design Award winner that defies description.
Rooms at the top: How this ADA-winning team built a title that defies description
Ask Jason Toff whether his Apple Design Award winner is a game or an app, and his answer is yes.
âThereâs no one-sentence description for Rooms, and that can be a blessing,â laughs Toff, CEO and head designer of Things, Inc. âItâs not entirely a game, and itâs not entirely a tool. Itâs more like a toy.â
Itâs also a blank canvas, cozy game, coding teacher, and social network â but weâre getting ahead of ourselves. At its heart, Rooms is a collection of user-generated 3-D spaces that feels like the open-ended world of the early internet. Start with an empty room or existing template, then fill it with an array of voxel decorations, items, pets, and avatars to create whatever space you like: a college apartment, medieval castle chamber, floating fantasy realm, pirate ship, or a Weezer concert (really), to name just a few. The only limits are the roomâs boundaries â and Rooms fans have even gotten around those. âOur 404 page is a room with no walls,â Toff says, âso people just started copying it to work around the constraint.â
ADA FACT SHEET
Rooms
- Winner: Visuals and Graphics
- Team: Things, Inc.
- Available on: iOS, iPadOS
- Team size: 4
Download Rooms from the App Store
In fact, that community element is a strong point: This creative tapestry of quirky games, tranquil havens, and clever ideas has been conjured by real people, which makes Rooms a social network as well. Whatâs more, users can click on each item to reveal its underlying code, offering them more options for customization.
To create Rooms â which, incidentally, won the ADA for Visuals and Graphics in games â Toff and cofounders Nick Kruge and Bruno Oliveira threw themselves back into their childhoods. âI was obsessed with Legos as a kid,â says Toff, not unexpectedly. âI found myself wondering, âWhatâs the digital equivalent of that?ââ
Rooms isnât just about rooms; creators have plenty of ways to noodle on their ideas.
Drawing on that inspiration â as well as Toffâs experiences with Kid Pix on his dadâs 1989-era Mac â the Rooms team began envisioning something that, as Oliveira says, kept the floor low but the ceiling high. âWe wanted anyone from 4-year-olds to their grandparents to be able to use Rooms,â he says, âand that meant making something free-form and creative.â
It also meant building something that gave a sense of approachability and creativity, which led them right to voxels. âBlocks have a charm, but they can also be kind of ugly,â Toff laughs. âLuckily, Brunoâs were cute and soft, so they felt approachable and familiar.â And from Oliveiraâs side, blocks offered a practical value. âItâs much easier to do 3-D modeling with blocks,â says Oliveira. âYou can just add or remove voxels whenever you want, which lowers the bar for everyone.â
We wanted anyone from 4-year-olds to their grandparents to be able to use Rooms, and that meant making something free-form and creative.
Jason Toff, CEO and head designer of Things, Inc.
Rooms launched in 2023 as a web-based app that included 1,000 voxel objects and allowed users to write their own code. It gained traction through both word of mouth and, more directly, a video that went viral in the cozy-gaming community. âAll of a sudden, we had all these people coming,â says Oliveira, âand we realized we needed to prioritize the mobile app. Nick was like, âI think we can get feature parity with desktop on the iPhone screen,â and we basically pulled a rabbit out of a hat.â Today, the vast majority of Rooms users are on mobile, where they spend the bulk of their time editing. âWe were just shocked by how much time people were spending making rooms,â he says. âThese werenât quick five-minute projects. We did not anticipate that.â
Of course the Things, Inc. team rebuilt their own offices in Rooms.
All that building fed into a social aspect as well. Toff says most of the items in Rooms are now created, edited, and amplified by lots of different users. âHereâs a good example: We have a sway effect that makes things wave back and forth a little,â he says. âSomeone realized that if they put some branches on a tree and added that effect, the tree immediately looked alive. Now everyoneâs doing that. Thereâs a real additive effect to building in Rooms.â Today, the Rooms library contains more than 10,000 items.
Thereâs a lot of power under the hood, too. âRooms uses a Lua scripting language that runs in a C++ context,â says Oliveira, âso itâs kind of Lua, encased in C++, encased in Unity, encased in iOS.â Every room, he says, is a new Unity instance. And adding native iOS elements â like sliders on the Explore page and a bottom navigation â gives what he calls the âdesign chefâs kiss.â
An early sketch of Rooms shows how the room design came together early in the process.
Like its community, the Rooms team is used to moving fast. âOne day I said, âIt would be cool if this had a D-pad and A/B buttons,â says Toff, âand about 10 hours later Bruno was like, âHere you go.ââ On another lark, Toff mentioned that it would be fun to let users fly around their rooms, and Kruge and Oliveira promptly created a âcamera modeâ thatâs come to be known internally as the âJason-Cam.â
Thatâs satisfying to a team that simply set out to build a cutting-edge plaything. âWe always had this metaphor that Rooms was a swimming pool with a shallow side and a deep side,â says Oliveira. âIt should be fun for people dabbling in the shallow side. But it should also be amazing for people swimming in the deep end. If you just want to look at rooms, you can. But you can also dive all the way down and write complicated code. Thereâs something for everyone.â
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
WWDC25: June 9-13, 2025
Join the worldwide developer community online for a week of technology and creativity.
Be there for the reveal of the latest Apple tools, frameworks, and features. Learn to elevate your apps and games through video sessions hosted by Apple engineers and designers. Engage with Apple experts in labs and connect with the worldwide developer community. All online and at no cost.
Assassin's Creed Shadows comes to Mac
Itâs an ice-cold late winterâs morning in Canada, but the offices of Ubisoft Quebec are ablaze with excitement.
The Ubisoft team is preparing the release of Assassinâs Creed Shadows, the 14th main entry in the series and an evolution for the franchise in nearly every detail. Itâs set in feudal 16th-century Japan, a rich and elegant period thatâs been long sought-after by fans and Ubisoft team members alike. It introduces a pair of fierce protagonists: Yasuke, a powerful warrior of African origin, and Naoe, an agile Shinobi assassin, both brought to life with attention to historical accuracy. Its world feels alive with an ever-changing dynamism thatâs apparent in everything from the shifting weather to the rotating seasons to the magical interplay of light and shadow.
And what's more, it's set to release on Mac the same day it arrives on PCs and consoles.
"It's been a longtime dream to bring the game to Mac," says Ubisoft executive producer Marc-Alexis Côté, who debuted the game on Mac during the WWDC24 Keynote. "It's incredible that I can now open a MacBook Pro and get this level of immersion." Shadows will also be coming later to iPad with M-series chips.
Naoe, one of the gameâs two protagonists, is an agile assassin whoâs at her best when striking from the shadows.
Today marks one of the first times that the gaming community will get its hands on Shadows, and to celebrate the occasion, the Ubisoft offices â a mix of cozy chalet-worthy reclaimed wood and wide-open windows that afford a view of snowy Quebec City rooftops â have been reskinned with an Assassinâs Creed theme, including a display that emphasizes the heft of Yasukeâs weapons, especially an imposing-looking 13-pound model of the characterâs sword. (On this day, the display is hosted by associate game director Simon Lemay-Comtois, who appears quite capable of wielding it.)
Download Assassin's Creed Shadows from the Mac App Store
CĂŽtĂ© calls Shadows his teamâs âmost ambitiousâ game. In crafting the gameâs expansive world, Ubisoftâs development team took advantage of an array of advanced Mac technologies: Metal 3 (working in concert with Ubisoftâs next-generation Anvil engine), Apple silicon, and a mix of HDR support and real-time ray tracing on Macs with M3 and M4 that CĂŽtĂ© says was âtransformativeâ in creating the gameâs immersion.
Itâs been a longtime dream to bring the game to Mac.
Marc-Alexis Côté, Ubisoft executive producer
âSeeing those millions of lines of code work natively on a Mac was a feeling thatâs hard to describe,â CĂŽtĂ© says. âWhen you look at the gameâs performance, the curve Apple is on with successive improvements to the M-series chips year after year, and the way the game looks on an HDR screen, youâre like, âIs this real?ââ
Assassinâs Creed Shadows is a balance of the technical and creative. For the former, associate technical director Mathieu Belanger says the capabilities of Mac laid the groundwork for technical success. âThe architecture of the hardware is so well done, thanks in part to the unified memory between the GPU and CPU. That made us think the future is bright for gaming on the platform. So many things about doing this on Mac were great right out of the box.â
Naoeâs counterpart, Yasuke, prefers the use of brute force.
On the creative side, Ubisoft creative director Jonathan Dumont focused on a different opportunity. âThe important thing was: Does this feel right? Is it what we want to send to players? And the answer was yes.â
The creative teamâs goal was nothing short of âmaking this world feel alive,â says Martin Bedard, a 20-year Ubisoft veteran who served as the gameâs technology director (and is very good at playing as Naoe). âYouâre put into a moment that really existed,â he says. âThis story is your playground.â
There are also fluffy kittens. Weâll get to those.
The ever-changing seasons lend an incredible variety to the gameâs environments.
And thereâs tremendous power behind the beauty, because the gameâs biomes, seasons, weather, and lighting are all dynamic creations. The sunset hour bathes the mountains in soft purple light; the sunâs rays float in through leaves and temple roofs. Pretty much every room has a candle in it, which means the light is always changing. âLook at the clouds here,â says Bedard, pointing at the screen. âThatâs not a rendering. These are all fluid-based cloud simulations.â
âJapan feels like itâs 80 percent trees and mountains,â says Dumont. âIf youâre building this world without the rain, and the winds, and the mountains, it doesnât feel right.â
Wherever you are, wherever you go, everything is beautiful and alive.
Mathieu Belanger, associate technical director
And those winds? âWe developed a lot of features that were barely possible before, and one of them was a full simulation of the wind, not just an animation,â says Belanger. âWe even built a humidity simulation that gathers clouds together.â For the in-game seasons, Ubisoft developed an engine that depicted houses, markets, and temples, in ever-changing conditions. âThis was all done along the way over the past four years,â he says.
To pursue historical accuracy, Dumont and the creative team visited Japan to study every detail, including big-picture details (like town maps) to very specific ones (like the varnish that would have been applied to 16th-century wood). It wasnât always a slam dunk, says CĂŽtĂ©: In one visit, their Japanese hosts recommended a revision to the light splashing against the mountains. âWe want to get all those little details right,â he says. (A âfull-immersion version,â entirely in Japanese with English subtitles, is available.)
To recreate the world of 16th-century Japan, the Ubisoft creative team visited Japan to study every detail.
Ubisoftâs decision to split the protagonist into two distinct characters with different identities, skill sets, origin stories, and class backgrounds came early in the process. (âThat was a fun day,â laughs Belanger.) Ubisoft team members emphasize that choosing between Naoe and Yasuke is a matter of personal preference â lethal subtlety vs. brute force. Players can switch between characters at any time, and, as you might suspect, the pair grows stronger together as the story goes on. Much of Naoeâs advantage comes from her ability to linger in the gameâs shadows â not just behind big buildings, but wherever the scene creates a space for her to hide. âThe masterclass is clearing out a board without being spotted once,â says Bedard.
(The Hideout is) peaceful. You can say, âI feel like putting some trees down, seeing what I collected, upgrading my buildings, and petting the cats.â
Jonathan Dumont, Ubisoft creative director
Which brings us to the Hideout, Naoe and Yasukeâs home base and a bucolic rural village that acts as a zen-infused respite from the ferocity of battle. âItâs a place that welcomes you back,â says Dumont. Itâs eminently customizable, both from a game-progression standpoint but also in terms of aesthetics. Where the battle scenes are a frenzy of bruising combat or stealth attacks, the Hideout is a refuge for supplies, artwork, found objects, and even a furry menagerie of cats, dogs, deer, and other calming influences. âThere are progressions, of course,â says Dumont, âbut itâs peaceful. You can say, âI feel like putting some trees down, seeing what I collected, upgrading my buildings, and petting the cats.â
âThe kittens were a P1 feature,â laughs associate game director Dany St-Laurent.
Yasuke prepares to face off against an opponent in what will likely be a fruitful battle.
Yet for all those big numbers, Dumont says the game boils down to something much simpler. âI just think the characters work super-well together,â he says. âItâs an open-world game, yes. But at its core, it features two characters youâll like. And the game is really about following their journey, connecting with them, exploring their unique mysteries, and seeing how they flow together. And I think the way in which they join forces is one of the best moments in the franchise.â
And if the Ubisoft team has its way, there will be plenty more moments to come. âI think the game will scale for years to come on the Mac platform,â says CĂŽtĂ©. âGames can be more and more immersive with each new hardware release. Weâre trying to create something here where more people can come with day-one games on the Mac, because I think itâs a beautiful platform.â
Hello Developer: March 2025
In this edition: An incredible AAA game comes to Mac. Plus, the latest on International Women's Day activities, WeChat, and more.
Apple Developer is now on WeChat
Check out the official Apple Developer WeChat account to find news, announcements, and upcoming activities for the developer community.
Get ready with the latest beta releases
The beta versions of iOS 18.4, iPadOS 18.4, macOS 15.4, tvOS 18.4, visionOS 2.4, and watchOS 11.4 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 16.3.
As previewed last year, iOS 18.4 and iPadOS 18.4 include support for default translation apps for all users worldwide, and default navigation apps for EU users.
Beginning April 24, 2025, apps uploaded to App Store Connect must be built with Xcode 16 or later using an SDK for iOS 18, iPadOS 18, tvOS 18, visionOS 2, or watchOS 11.
New requirement for apps on the App Store in the European Union
As of today, apps without trader status have been removed from the App Store in the European Union (EU) until trader status is provided and verified by Apple.
Account Holders or Admins in the Apple Developer Program will need to enter this status in App Store Connect to comply with the Digital Services Act.
New features for APNs token authentication are now available
You can now take advantage of upgraded security options when creating new token authentication keys for the Apple Push Notification service (APNs).
Team-scoped keys enable you to restrict your token authentication keys to either development or production environments, providing an additional layer of security and ensuring that keys are used only in their intended environments.
Topic-specific keys provide more granular control by enabling you to associate each key with a specific bundle ID, allowing for more streamlined and organized key management. This is particularly beneficial for large organizations that manage multiple apps across different teams.
Your existing keys will continue to work for all push topics and environments. At this time, you donât have to update your keys unless you want to take advantage of the new capabilities.
For detailed instructions on how to secure your communications with APNs, read Establishing a token-based connection to APNs.
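As a rough illustration of what that token-based connection involves, here is a minimal Swift sketch of building the JWT that APNs expects, assuming the contents of a .p8 key from the developer portal are passed in as a PEM string. The function name `makeAPNsToken` is illustrative, not an Apple API; refer to the documentation linked above for the authoritative steps.

```swift
import Foundation
import CryptoKit

// Illustrative sketch: build the JWT used for token-based APNs authentication.
// Header: {"alg":"ES256","kid":<key ID>}; claims: {"iss":<team ID>,"iat":<now>};
// signed with the .p8 private key downloaded from the developer portal.
func makeAPNsToken(keyID: String, teamID: String, p8PEM: String) throws -> String {
    func base64URL(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }
    let header = try JSONSerialization.data(withJSONObject: ["alg": "ES256", "kid": keyID])
    let claims = try JSONSerialization.data(withJSONObject: ["iss": teamID,
                                                             "iat": Int(Date().timeIntervalSince1970)])
    let signingInput = base64URL(header) + "." + base64URL(claims)
    let key = try P256.Signing.PrivateKey(pemRepresentation: p8PEM)
    let signature = try key.signature(for: Data(signingInput.utf8))
    return signingInput + "." + base64URL(signature.rawRepresentation)
}

// The resulting token is sent as "authorization: bearer <token>" on each APNs request
// and should be refreshed periodically per Apple's guidance.
```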
Upcoming changes to offers and trials for subscriptions in South Korea
Starting February 14, 2025, new regulatory requirements in South Korea will apply to all apps with offers and trials for auto-renewing subscriptions.
To comply, if you offer trials or discounted offers for auto-renewing subscriptions in your app or game, you must obtain additional consent for the trial or offer after the initial transaction. The App Store helps collect this consent by informing affected subscribers via email, push notification, and an in-app price consent sheet, and asking them to agree to the new price.
This additional consent must be obtained from customers within 30 days from the payment or conversion date for:
- Free to paid trials
- Discounted subscription offers to standard-price subscriptions
Apps that do not offer a free trial or discounted offer before a subscription converts to the regular price are not affected.
Tax and price updates for apps, In-App Purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Tax and pricing updates for February
As of February 6:
Your proceeds from the sale of eligible apps and In-App Purchases have been modified in:
- Azerbaijan: value-added tax (VAT) introduction of 18%
- Peru: VAT introduction of 18%
- Slovakia: Standard VAT rate increase from 20% to 23%
- Slovakia: Reduced VAT rate introduction of 5% for ebooks
- Estonia: Reduced VAT rate increase from 5% to 9% for news publications, magazines, and other periodicals
- Finland: Reduced VAT rate increase from 10% to 14% for ebooks
Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Azerbaijan and Peru.¹
As of February 24:
Pricing for apps and In-App Purchases will be updated for the Azerbaijan and Peru storefronts if you haven't selected one of these as the base for your app or In-App Purchase.² These updates also consider the VAT introductions listed in the tax updates section above.
If you've selected the Azerbaijan or Peru storefront as the base for your app or In-App Purchase, prices won't change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won't change in any region if your In-App Purchase is an auto-renewable subscription. Prices also won't change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In-App Purchases, and auto-renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your appâs base country or region
Pricing and availability start times by country or region
Set a price for an In-App Purchase
Beginning April 1:
As a result of last year's change in Japan's tax regulations, Apple (through iTunes K.K. in Japan) is now designated as a Specified Platform Operator by the Japan tax authority. All paid apps and In-App Purchases (including game items, such as coins) sold by non-Japan-based developers on the App Store in Japan will be subject to the platform tax regime. Apple will collect and remit a 10% Japanese consumption tax (JCT) to the National Tax Agency JAPAN on such transactions at the time of purchase. Your proceeds will be adjusted accordingly.
Please note any prepaid payment instruments (such as coins) sold prior to April 1, 2025, will not be subject to platform taxation, and the relevant JCT compliance should continue to be managed by the developer.
For specific information on how the JCT affects in-game items, see Question 7 in the Tax Agency of Japanâs Q&A about Platform Taxation of Consumption Tax.
Learn more about your proceeds
¹ Translations of the updated agreement are available on the Apple Developer website today.
² Excludes auto-renewable subscriptions.
Game distribution on the App Store in Vietnam
The Vietnamese Ministry of Information and Communications (MIC) requires games to be licensed to remain available on the App Store in Vietnam. To learn more and apply for a game license, review the regulations.
Once you have obtained your license:
- Sign in to App Store Connect.
- Enter the license number and the associated URL in the description section of your gameâs product page.
- Note that you only need to provide this information for the App Store localization displayed on the Vietnam storefront.
- Submit an update to App Review.
If you have questions on how to comply with these requirements, please contact the Authority of Broadcasting and Electronic Information (ABEI) under the Vietnamese Ministry of Information and Communications.
Hello Developer: February 2025
In this edition: The latest on developer activities, the Swift Student Challenge, the team behind Bears Gratitude, and more.
The good news bears: Inside the adorably unorthodox design of Bears Gratitude
Hereâs the story of how a few little bears led their creators right to an Apple Design Award.
Bears Gratitude is a warm and welcoming title developed by the Australian husband-and-wife team of Isuru Wanasinghe and Nayomi Hettiarachchi.
Journaling apps just donât get much cuter: Through prompts like âToday isnât over yet,â âIâm literally a new me,â and âCompliment someone,â the Swift-built app and its simple hand-drawn mascots encourage people to get in the habit of celebrating accomplishments, fostering introspection, and building gratitude. âAnd gratitude doesnât have to be about big moments like birthdays or anniversaries,â says Wanasinghe. âIt can be as simple as having a hot cup of coffee in the morning.â
ADA FACT SHEET
Bears Gratitude
- Winner: Delight and Fun
- Available on: iOS, iPadOS, macOS
- Team size: 2
Download Bears Gratitude from the App Store
Wanasinghe is a longtime programmer whoâs run an afterschool tutoring center in Sydney, Australia, for nearly a decade. But the true spark for Bears Gratitude and its predecessor, Bears Countdown, came from Hettiarachchi, a Sri Lankan-born illustrator who concentrated on her drawing hobby during the Covid-19 lockdown.
Wanasinghe is more direct. âThe art is the heart of everything we do,â he says.
Bears Gratitude was developed by the Australian husband-and-wife team of Isuru Wanasinghe and Nayomi Hettiarachchi.
In fact, the art is the whole reason the app exists. As the pandemic months and drawings stacked up, Hettiarachchi and Wanasinghe found themselves increasingly attached to her cartoon creations, enough that they began to consider how to share them with the world. The usual social media routes beckoned, but given Wanasingheâs background, the idea of an app offered a stronger pull.
âIn many cases, you get an idea, put together a design, and then do the actual development,â he says. âIn our case, itâs the other way around. The art drives everything.â
The art is the heart of everything we do.
Isuru Wanasinghe, Bears Gratitude cofounder
With hundreds of drawings at their disposal, the couple began thinking about the kinds of apps that could host them. Their first release was Bears Countdown, which employed the drawings to help people look ahead to birthdays, vacations, and other marquee moments. Countdown was never intended to be a mass-market app; the pair didnât even check its launch stats on App Store Connect. âWeâd have been excited to have 100 people enjoy what Nayomi had drawn,â says Wanasinghe. âThatâs where our heads were at.â
But Countdown caught on with a few influencers and became enough of a success that the pair began thinking of next steps. "We thought, well, we've given people a way to look forward," says Wanasinghe. "What about reflecting on the day you just had?"
Hettiarachchiâs art samples get a close inspection from one of her trusted associates.
Gratitude keeps the cuddly cast from Countdown, but otherwise the app is an entirely different beast. It was also designed in what Wanasinghe says was a deliberately unusual manner. âOur design approach was almost bizarrely linear,â says Wanasinghe. âWe purposely didnât map out the app. We designed it in the same order that users experience it.â
Other unorthodox decisions followed, including the absence of a sign-in screen. âWe wanted people to go straight into the experience and start writing,â he says. The home-screen journaling prompts are presented via cards that users flip through by tapping left and right. âItâs definitely a nonstandard UX,â says Wanasinghe, âbut we found over and over again that the first thing users did was flip through the cards.â
Our design approach was almost bizarrely linear. We purposely didnât map out the app. We designed it in the same order that users experience it.
Isuru Wanasinghe, Bears Gratitude cofounder
Another twist: The appâs prompts are written in the voice of the user, which Wanasinghe says was done to emphasize the personal nature of the app. âWe wrote the app as if we were the only ones using it, which made it more relatable,â he says.
Then there are the bears, which serve not only as a distinguishing hook in a busy field, but also as a design anchor for its creators. âWeâre always thinking: âInstead of trying to set our app apart, how do we make it ours?â We use apps all the time, and we know how they behave. But here we tried to detach ourselves from all that, think of it as a blank canvas, and ask, âWhat do we want this experience to be?ââ
Early design sketches for Bears Gratitude show the collection of swipe-able prompt cards.
Bears Gratitude isnât a mindfulness app â Wanasinghe is careful to clarify that neither he nor Hettiarachchi are therapists or mental health professionals. âAll we know about are the trials and tribulations of life,â he says.
But those trials and tribulations have reached a greater world. âPeople have said, âThis is just something I visit every day that brings me comfort,ââ says Wanasinghe. âWeâre so grateful this is the way we chose to share the art. Weâre plugged into peopleâs lives in a meaningful way.â
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Apply for the Swift Student Challenge now through February 23
Submissions for the Swift Student Challenge 2025 are now open through February 23. You have three more weeks to design, test, refine, and submit your app playground for consideration to be named one of 350 winners.
What to know:
- The Challenge is free to enter — you just need access to an iPad or Mac with Swift Playgrounds or Xcode.
- The best app ideas are personal — let your passion shine through your work.
- No formal coding experience required — the Challenge is open to students of all levels.
- Your app playground doesn't need to be intricate — it should be experienced within 3 minutes or less.
Where to start:
- Explore tools and tutorials to build an incredible app playground.
- Get inspired by last yearâs Distinguished Winners, learn about their winning apps, and read about their experiences at Apple Park.
Introducing the Advanced Commerce API
The App Store facilitates billions of transactions annually to help developers grow their businesses and provide a world-class customer experience. To further support developersâ evolving business models â such as exceptionally large content catalogs, creator experiences, and subscriptions with optional add-ons â weâre introducing the Advanced Commerce API.
Developers can apply to use the Advanced Commerce API to support eligible App Store business models and more flexibly manage their In-App Purchases within their app. These purchases leverage the power of the trusted App Store commerce system, including end-to-end payment processing, tax support, customer service, and more, so developers can focus on providing great app experiences.
Apps without trader status will be removed from the App Store in the EU
Starting February 17, 2025: Due to the European Union's Digital Services Act, apps without trader status will be removed from the App Store in the European Union until trader status is provided and, if necessary, verified.
As a reminder, Account Holders or Admins in the Apple Developer Program need to enter trader status in App Store Connect for apps on the App Store in the European Union in order to comply with the Digital Services Act.
Reminder: Upcoming Changes to the App Store Receipt Signing Intermediate Certificate
As part of ongoing efforts to improve security and privacy on Apple platforms, the App Store receipt signing intermediate certificate is being updated to use the SHA-256 cryptographic algorithm. This certificate is used to sign App Store receipts, which are the proof of purchase for apps and In-App Purchases.
This update is being completed in multiple phases and some existing apps on the App Store may be impacted by the next update, depending on how they verify receipts.
Starting January 24, 2025, if your app performs on-device receipt validation and doesnât support the SHA-256 algorithm, your app will fail to validate the receipt. If your app prevents customers from accessing the app or premium content when receipt validation fails, your customers may lose access to their content.
If your app performs on-device receipt validation, update your app to support certificates that use the SHA-256 algorithm; alternatively, use the AppTransaction and Transaction APIs to verify App Store transactions.
For more details, view TN3138: Handling App Store receipt signing certificate changes.
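For apps that want to move off on-device receipt parsing entirely, the StoreKit 2 route mentioned above is quite compact. The following is a minimal sketch (not taken from the technote) using `AppTransaction`; it assumes a deployment target recent enough to include StoreKit 2's `AppTransaction` API, and the handling of the unverified case is a placeholder for your own entitlement logic.

```swift
import StoreKit

// Minimal sketch: verify the app's purchase with StoreKit 2 instead of
// parsing and validating the App Store receipt on device.
func verifyAppPurchase() async throws {
    let result = try await AppTransaction.shared
    switch result {
    case .verified(let appTransaction):
        // The signed transaction passed StoreKit's verification checks on device.
        print("Originally purchased as app version: \(appTransaction.originalAppVersion)")
    case .unverified(_, let error):
        // Verification failed; treat this as having no proof of purchase.
        throw error
    }
}
```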
Algorithm changes to server connections for Apple Pay on the Web
Starting next month, Apple will change the supported algorithms that secure server connections for Apple Pay on the Web. In order to maintain uninterrupted service, youâll need to ensure that your production servers support one or more of the designated six ciphers before February 4, 2025.
These algorithm changes will affect any secure connection youâve established as part of your Apple Pay integration, including the following touchpoints:
- Requesting an Apple Pay payment session (Apple Pay on the Web only)
- Renewing your domain verification (Apple Pay on the Web only)
- Receiving and handling merchant token notifications for recurring, deferred, and automatic-reload transactions (Apple Pay on the Web and in app)
- Creating and updating Wallet Orders (Apple Pay on the Web and in app)
- Managing merchant onboarding via the Apple Pay Web Merchant Registration API (payment service provider (PSP) and e-commerce platforms only)
Hello Developer: January 2025
In the first edition of the new year: Bring SwiftUI to your app in Cupertino, get ready for the Swift Student Challenge, meet the team behind Oko, and more.
Walk this way: How Oko leverages AI to make street crossings more accessible
Oko is a testament to the power of simplicity.
The 2024 Apple Design Award winner for Inclusivity and 2024 App Store Award winner for Cultural Impact leverages Artificial Intelligence to help blind or low-vision people navigate pedestrian walkways by alerting them to the state of signals â âWalk,â âDonât Walk,â and the like â through haptic, audio, and visual feedback. The app instantly affords more confidence to its users. Its bare-bones UI masks a powerful blend of visual and AI tools under the hood. And itâs an especially impressive achievement for a team that had no iOS or Swift development experience before launch.
âThe biggest feedback we get is, âItâs so simple, thereâs nothing complex about it,â and thatâs great to hear,â says Vincent Janssen, one of Okoâs three Belgium-based founders. âBut we designed it that way because thatâs what we knew how to do. It just happened to also be the right thing.â
ADA FACT SHEET
From left: Willem Van de Mierop, Michiel Janssen, and Vincent Janssen are the three cofounders of Oko. The appâs name means âeye.â
Oko
- Winner: Inclusivity
- Team: AYES BV
- Available on: iPhone
- Team size: 6
- Previous accolades: 2024 App Store Award winner for Cultural Impact; App Store Editorsâ Choice
Download Oko from the App Store
For Janssen and his cofounders, brother Michiel and longtime friend Willem Van de Mierop, Oko â the name translates to âeyeâ â was a passion project that came about during the pandemic. All three studied computer science with a concentration in AI, and had spent years working in their hometown of Antwerp. But by the beginning of 2021, the trio felt restless. âWe all had full-time jobs,â says Janssen, âbut the weekends were pretty boring.â Yet they knew their experience couldnât compare to that of a longtime friend with low vision, who Janssen noticed was feeling more affected as the autumn and winter months went on.
âWe really started to notice that he was feeling isolated more than others,â says Janssen. âHere in Belgium, we were allowed to go for walks, but you had to be alone or with your household. That meant he couldnât go with a volunteer or guide. As AI engineers, that got us thinking, âWell, there are all these stories about autonomous vehicles. Could we come up with a similar system of images or videos that would help people find their way around public spaces?ââ
I had maybe opened Xcode three times a few years before, but otherwise none of us had any iOS or Swift experience.
Vincent Janssen, Oko founder
The trio began building a prototype that consisted of a microcomputer, 3D-printed materials, and a small portable speaker borrowed from the Janssen brothersâ father. Today, Janssen calls it âhacky hardware,â something akin to a small computer with a camera. But it allowed the team and their friend â now their primary tester â to walk the idea around and poke at the technologyâs potential. Could AI recognize the state of a pedestrian signal? How far away could it detect a Donât Walk sign? How would it perform in rain or wind or snow? There was just one way to know. âWe went out for long walks,â says Janssen.
And while the AI and hardware performed well in their road tests, issues arose around the hardware's size and usability, and the team began to realize that software offered a better solution. The fact that none of the three had the slightest experience building iOS apps was simply a hurdle to clear. "I had maybe opened Xcode three times a few years before," says Janssen, "but otherwise none of us had any iOS or Swift experience."
Oko helps people navigate pedestrian walkways through interactive maps and audio, visual, and haptic feedback.
So that summer, the team pivoted to software, quitting their full-time jobs and throwing themselves into learning Swift through tutorials, videos, and trusty web searches. The core idea crystallized quickly: Build a simple app that relied on Camera, the Maps SDK, and a powerful AI algorithm that could help people get around town. âToday, itâs a little more complex, but in the beginning the app basically opened up a camera feed and a Core ML model to process the images,â says Janssen, noting that the original model was brought over from Python. âLuckily, the tools made the conversion really smooth.â (Okoâs AI models run locally on device.)
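The article doesn't show Oko's code, but as an illustration of the "camera feed plus Core ML model" shape it describes, here is a sketch that runs a classifier on a captured frame via the Vision framework. It assumes you already have a `VNCoreMLModel` wrapping some image classifier; the function name and the label strings in the comment are placeholders, not Oko's implementation.

```swift
import Vision
import CoreML
import CoreVideo

// Illustrative only: classify a camera frame with a Core ML model via Vision,
// roughly the "camera feed + Core ML model" pipeline described above.
func classifyFrame(_ pixelBuffer: CVPixelBuffer,
                   using model: VNCoreMLModel,
                   completion: @escaping (String?) -> Void) {
    let request = VNCoreMLRequest(model: model) { request, _ in
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier) // e.g. "walk" / "dont_walk" in a hypothetical label set
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try? handler.perform([request])
}
```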
With the software taking shape, more field testing was needed. The team reached out to accessibility-oriented organizations throughout Belgium, drafting a team of 100 or so testers to âcodevelop the app,â says Janssen. Among the initial feedback: Though Oko was originally designed to be used in landscape mode, pretty much everyone preferred holding their phones in portrait mode. âI had the same experience, to be honest,â said Janssen, âbut that meant we needed to redesign the whole thing.â
The Oko team navigates through prototypes at a review session in their hometown of Antwerp, Belgium.
Other changes included amending the audio feedback to more closely mimic existing real-world sounds, and addressing requests to add more visual feedback. The experience amounted to getting a real-world education about accessibility on the fly. âWe found ourselves learning about VoiceOver and haptic feedback very quickly,â says Janssen.
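To make the VoiceOver-and-haptics learning curve a little more concrete, here is a small sketch of the kind of feedback an app like this might give when a state changes. The trigger and the announcement text are purely illustrative assumptions, not Oko's actual code.

```swift
import UIKit

// Illustrative: announce a change to VoiceOver users and pair it with a haptic cue.
func announceChange(_ description: String) {
    // Spoken aloud when VoiceOver is running; ignored otherwise.
    UIAccessibility.post(notification: .announcement, argument: description)

    // A success-style haptic as a non-visual confirmation.
    UINotificationFeedbackGenerator().notificationOccurred(.success)
}

// announceChange("Walk signal")  // hypothetical call site
```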
Still, the project went remarkably fast â Oko launched on the App Store in December 2021, not even a year after the trio conceived of it. âIt took a little while to do things, like make sure the UI wasnât blocked, especially since we didnât fully understand the code we wrote in Swift,â laughs Janssen, âbut in the end, the app was doing what it needed to do.â
We found ourselves learning about VoiceOver and haptic feedback.
Vincent Janssen, Oko founder
The accessibility community took notice. And in the following months, the Oko team continued expanding its reach â Michiel Janssen and Van de Mierop traveled to the U.S. to meet with accessibility organizations and get firsthand experience with American street traffic and pedestrian patterns. But even as the app expanded, the team retained its focus on simplicity. In fact, Janssen says, they explored and eventually jettisoned some expansion ideas â including one designed to help people find and board public transportation â that made the app feel a little too complex.
Today, the Oko team numbers 6, including a fleet of developers who handle more advanced Swift matters. âAbout a year after we launched, we got feedback about extra features and speed improvements, and needed to find people who were better at Swift than we are,â laughs Janssen. At the same time, the original trio is now learning about business, marketing, and expansion.
At its core, Oko remains a sparkling example of a simple app that completes its task well. âItâs still a work in progress, and weâre learning every day,â says Janssen. In other words, there are many roads yet to cross.
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Get ready with the latest beta releases
The beta versions of iOS 18.3, iPadOS 18.3, macOS 15.3, tvOS 18.3, visionOS 2.3, and watchOS 11.3 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 16.2.
App Store Award winners announced
Join us in celebrating the outstanding work of these developers from around the world.
Updated Apple Developer Program License Agreement now available
Attachment 2 of the Apple Developer Program License Agreement has been amended to specify requirements for use of the In-App Purchase API. Please review the changes and accept the updated terms in your account.
View the full terms and conditions
Translations of the updated agreement will be available on the Apple Developer website within one month.
Hello Developer: December 2024
Get your apps and games ready for the holidays
The busiest season on the App Store is almost here. Make sure your apps and games are up to date and ready.
App Review will continue to accept submissions throughout the holiday season. Please plan to submit time-sensitive submissions early, as we anticipate high volume and reviews may take longer to complete from December 20-26.
App Store Award finalists announced
Every year, the App Store Awards celebrate exceptional apps and games that improve people's lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact. This year, the App Store Editorial team is proud to recognize over 40 outstanding finalists. Winners will be announced in the coming weeks.
Price and tax updates for apps, In-App Purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Tax updates as of October:
Your proceeds from the sale of eligible apps and InâApp Purchases have been increased in:
- Nepal: Apple no longer remits Nepal value-added tax (VAT) for local developers and proceeds were increased accordingly.
- Kazakhstan: Apple no longer remits Kazakhstan VAT for local developers and proceeds were increased accordingly.
- Madeira: Decrease of the Madeira VAT rate from 5% to 4% for news publications, magazines and other periodicals, books, and audiobooks.
Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple will not remit VAT in Nepal and Kazakhstan for local developers.
Learn more about your proceeds
Price updates as of December 2:
- Pricing for apps and In-App Purchases will be updated for the Japan and Türkiye storefronts if you haven't selected one of these as the base for your app or In-App Purchases.
If you've selected the Japan or Türkiye storefront as the base for your app or In-App Purchase, prices won't change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won't change in any region if your In-App Purchase is an auto-renewable subscription and won't change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In-App Purchases, and auto-renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your appâs base country or region
Enhancements to the App Store featuring process
Share your app or gameâs upcoming content and enhancements for App Store featuring consideration with new Featuring Nominations in App Store Connect. Submit a nomination to tell our team about a new launch, in-app content, or added functionality. If youâre featured in select placements on the Today tab, youâll also receive a notification via the App Store Connect app.
In addition, you can promote your app or gameâs biggest moments â such as an app launch, new version, or select featuring placements on the App Store â with readymade marketing assets. Use the App Store Connect app to generate Apple-designed assets and share them to your social media channels. Include the provided link alongside your assets so people can easily download your app or game on the App Store.
New Broadcast Push Notification Metrics Now Available in the Push Notifications Console
The Push Notifications Console now includes metrics for broadcast push notifications sent in the Apple Push Notification service (APNs) production environment. The consoleâs interface provides an aggregated view of the broadcast push notifications that are successfully accepted by APNs, the number of devices that receive them, and a snapshot of the maximum number of devices subscribed to your channels.
Coding in the kitchen: How Devin Davies whipped up the tasty recipe app Crouton
Letâs get this out of the way: Yes, Devin Davies is an excellent cook. âIâm not, like, a professional or anything,â he says, in the way that people say theyâre not good at something when they are.
But in addition to knowing his way around the kitchen, Davies is also a seasoned developer whose app Crouton, a Swift-built cooking aid, won him the 2024 Apple Design Award for Interaction.
Crouton is part recipe manager, part exceptionally organized kitchen assistant. For starters, the app collects recipes from wherever people find them â blogs, family cookbooks, scribbled scraps from the â90s, wherever â and uses tasty ML models to import and organize them. âIf you find something online, just hit the Share button to pull it into Crouton,â says the New Zealand-based developer. âIf you find a recipe in an old book, just snap a picture to save it.â
And when itâs time to start cooking, Crouton reduces everything to the basics by displaying only the current step, ingredients, and measurements (including conversions). Thereâs no swiping around between apps to figure out how many fl oz are in a cup; no setting a timer in a different app. Itâs all handled right in Crouton. âThe key for me is: How quickly can I get you back to preparing the meal, rather than reading?â Davies says.
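That kind of in-place unit conversion is something Foundation handles directly. As a minimal sketch (not Crouton's code) of the fluid-ounces-per-cup example:

```swift
import Foundation

// Minimal sketch of the unit conversion described above (not Crouton's code).
let oneCup = Measurement(value: 1, unit: UnitVolume.cups)
let inFluidOunces = oneCup.converted(to: .fluidOunces)
print(inFluidOunces) // roughly 8.1 fl oz; Foundation's "cup" unit is the 240 mL metric cup
```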
ADA FACT SHEET
Crouton
- Winner: Interaction
- Available on: iPhone, iPad, Mac, Apple Vision Pro, Apple Watch
- Team size: 1
Download Crouton from the App Store
Crouton is the classic case of a developer whipping up something he needed. As the de facto chef in the house, Davies had previously done his meal planning in the Notes app, which worked until, as he laughs, âit got a little out of hand.â
At the time, Davies was in his salad days as an iOS developer, so he figured he could build something that would save him a little time. (Itâs in his blood: Daviesâs father is a developer too.) "Programming was never my strong suit,â he says, âbut once I started building something that solved a problem, I started thinking of programming as a means to an end, and that helped.â
Daviesâs full-time job was his meal ticket, but he started teaching himself Swift on the side. Swift, he says, clicked a lot faster than the other languages heâd tried, especially as someone who was still developing a taste for programming. âIt still took me a while to get my head into it,â he says, âbut I found pretty early on that Swift worked the way I wanted a language to work. You can point Crouton at some text, import that text, and do something with it. The amount of steps you donât have to think about is astounding.â
I found pretty early on that Swift worked the way I wanted a language to work.
Devin Davies, Crouton
Coding with Swift offered plenty of baked-in benefits. Davies leaned on platform conventions, like lists and collection views, to make navigating Crouton familiar and easy. Camera APIs and VisionKit powered text recognition; a separate model organized imported ingredients by category.
âI could separate out a roughly chopped onion from a regular onion and then add the quantity using a Core ML model,â he says. âItâs amazing how someone like me can build a model to detect ingredients when I really have zero understanding of how it works.â
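The article doesn't include Crouton's code, but the "snap a picture to save it" flow maps onto Apple's text-recognition APIs. Here is a rough sketch using the Vision framework's `VNRecognizeTextRequest` (the article credits VisionKit; this is simply an illustration of the recognition step, and the categorization model Davies describes would sit downstream of it).

```swift
import Vision
import CoreGraphics

// Rough sketch of recognizing recipe text in a photo with Vision.
// How Crouton actually wires this up isn't shown in the article.
func recognizeRecipeText(in cgImage: CGImage,
                         completion: @escaping ([String]) -> Void) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the top candidate string for each detected line of text.
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
}
```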
Davies designed Crouton with simplicity in mind at all times. âI spent a lot of time figuring out what to leave out rather than bring in,â he says.
The app came together quickly: The first version was done in about six months, but Crouton simmered for a while before finding its audience. âMy mom and I were the main active users for maybe a year,â Davies laughs. âBut itâs really important to build something that you use yourself â especially when youâre an indie â so thereâs motivation to carry on.â
Davies served up Crouton updates for a few years, and eventually the app gained more traction, culminating with its Apple Design Award for Interaction at WWDC24. Thatâs an appropriate category, Davies says, because he believes his approach to interaction is his appâs special sauce. âMy skillset is figuring out how the pieces of an app fit together, and how you move through them from point A to B to C,â he says. âI spent a lot of time figuring out what to leave out rather than bring in.â
Crouton recipes can be imported from blogs, cookbooks, scraps of paper, or anywhere else they might be found.
Davies hopes to use the coming months to explore spicing up Crouton with Apple Intelligence, Live Activities on Apple Watch, and translation APIs. (Though Crouton is his primary app, heâs also built an Apple Vision Pro app called Plate Smash, which is presumably very useful for cooking stress relief.)
But itâs important to him that any new features or upgrades pair nicely with the current Crouton. âIâm a big believer in starting out with core intentions and holding true to them,â he says. âI donât think that the interface, over time, has to be completely different.â
My skillset is figuring out how the pieces of an app fit together, and how you move through them from point A to B to C.
Devin Davies, Crouton
Because it's a kitchen assistant, Crouton is a very personal app. It's in someone's kitchen at mealtime, it's helping people prepare meals for their loved ones, it's enabling them to expand their culinary reach. It makes a direct impact on a person's day. That's a lot of influence to have as an app developer — even when a recipe doesn't quite pan out.
âSometimes Iâll hear from people who discover a bug, or even a kind of misunderstanding, but theyâre always very kind about it,â laughs Davies. âTheyâll tell me, âOh, I was baking a cake for my daughterâs birthday, and I put in way too much cream cheese and I ruined it. But, great app!ââ
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Hello Developer: November 2024
In this edition: The Swift Pathway, new developer activities around the world, and an interview with the creator of recipe app Crouton.
Upcoming changes to the App Store Receipt Signing Intermediate Certificate
As part of ongoing efforts to improve security and privacy on Apple platforms, the App Store receipt signing intermediate certificate is being updated to use the SHA-256 cryptographic algorithm. This certificate is used to sign App Store receipts, which are the proof of purchase for apps and In-App Purchases.
This update is being completed in multiple phases and some existing apps on the App Store may be impacted by the next update, depending on how they verify receipts.
Starting January 24, 2025, if your app performs on-device receipt validation and doesn't support the SHA-256 algorithm, your app will fail to validate the receipt. If your app prevents customers from accessing the app or premium content when receipt validation fails, your customers may lose access to their content.
If your app performs on-device receipt validation, update your app to support certificates that use the SHA-256 algorithm; alternatively, use the AppTransaction and Transaction APIs to verify App Store transactions.
For more details, view TN3138: Handling App Store receipt signing certificate changes.
TestFlight enhancements to help you reach testers
Beta testing your apps, games, and App Clips is even better with new enhancements to TestFlight. Updates include:
- Redesigned invitations. TestFlight invitations now include your beta app description to better highlight new features and content your app or game offers to prospective testers. Apps and games with an approved version thatâs ready for distribution can also include their screenshots and app category in their invite. Weâve also added a way for people to leave feedback if they didnât join your beta, so you can understand why they didnât participate.
- Tester enrollment criteria. You can choose to set criteria, such as device type and OS versions, to more easily enroll qualified testers via a public link to provide more relevant feedback on your invite.
- Public link metrics. Find out how successful your public link is at enrolling testers for your app with new metrics. Understand how many testers viewed your invite in the TestFlight app and chose to accept it. If youâve set criteria for the public link, you can also view how many testers didnât meet the criteria.
To get started with TestFlight, upload your build, add test information, and invite testers.
Get ready with the latest beta releases
The beta versions of iOS 18.2, iPadOS 18.2, and macOS 15.2 are now available. Get your apps ready by confirming they work as expected on these releases. And make sure to build and test with Xcode 16.2 beta to take advantage of the advancements in the latest SDKs.
As previewed earlier this year, changes to the browser choice screen, default apps, and app deletion for EU users, as well as support in Safari for exporting user data and for web browsers to import that data, are now available in the beta versions of iOS 18.2 and iPadOS 18.2.
These releases also include improvements to the Apps area in Settings first introduced in iOS 18 and iPadOS 18. All users worldwide will be able to manage their default apps via a Default Apps section at the top of the Apps area. New calling and messaging defaults are also now available for all users worldwide.
Following feedback from the European Commission and from developers, in these releases developers can develop and test EU-specific features, such as alternative browser engines, contactless apps, marketplace installations from web browsers, and marketplace apps, from anywhere in the world. Developers of apps that use alternative browser engines can now use WebKit in those same apps.
View details about the browser choice screen, how to make an app available for users to choose as a default, how to create a calling or messaging app that can be a default, and how to import user data from Safari.
Updated agreements now available
The Apple Developer Program License Agreement and its Schedules 1, 2, and 3 have been updated to support updated policies and upcoming features, and to provide clarification. Please review the changes below and accept the updated terms in your account.
Apple Developer Program License Agreement
- Definitions, Section 3.3.3(J): Specified requirements for use of App Intents.
- Definitions, Section 3.3.5(C): Clarified requirements for use of Sign in With Apple.
- Definitions, Section 3.3.8(G): Specified requirements for use of the Critical Messaging API.
- Definitions, Sections 3.3.9(C): Clarified requirements for use of the Apple Pay APIs; updated definition of âAppleâ for use of the Apple Pay APIs.
- Attachment 2: Clarified requirements for use of the In-App Purchase API.
Schedules 1, 2, and 3
Apple Services Pte. Ltd. is now the Apple legal entity responsible for the marketing and End-User download of the Licensed and Custom Applications by End-Users located in the following regions:
- Bhutan
- Brunei
- Cambodia
- Fiji
- Korea
- Laos
- Macau
- Maldives
- Micronesia, Fed States of
- Mongolia
- Myanmar
- Nauru
- Nepal
- Papua New Guinea
- Palau
- Solomon Islands
- Sri Lanka
- Tonga
- Vanuatu
Paid Applications Agreement (Schedules 2 and 3)
Exhibit B: Indicated that Apple shall not collect and remit taxes for local developers in Nepal and Kazakhstan, and such developers shall be solely responsible for the collection and remittance of such taxes as may be required by local law.
Exhibit C:
- Section 6: Clarified that Apple will apply Korean VAT on the commissions payable by Korean developers to Apple to be deducted from remittance with respect to sales to Korean customers pursuant to local tax laws.
- Section 10: For Singaporean developers who have registered for Singapore GST and have provided their Singapore GST registration number to Apple, clarified that Apple will apply Singaporean GST on the commissions payable by Singaporean developers to Apple to be deducted from remittance with respect to sales to Singaporean customers pursuant to local tax laws.
View the full terms and conditions
Translations of the Apple Developer Program License Agreement will be available on the Apple Developer website within one month.
New requirement for app updates in the European Union
Starting today, in order to submit updates for apps on the App Store in the European Union (EU), Account Holders or Admins in the Apple Developer Program need to enter trader status in App Store Connect. If you're a trader, you'll need to provide your trader information before you can submit your app for review.
Starting February 17, 2025, apps without trader status will be removed from the App Store in the EU until trader status is provided and verified in order to comply with the Digital Services Act.
Apple Push Notification service server certificate update
The Certification Authority (CA) for Apple Push Notification service (APNs) is changing. APNs will update the server certificates in sandbox on January 20, 2025, and in production on February 24, 2025. All developers using APNs will need to update their application's Trust Store to include the new server certificate: SHA-2 Root: USERTrust RSA Certification Authority certificate.
To ensure a smooth transition and avoid push notification delivery failures, please make sure that both old and new server certificates are included in the Trust Store before the cut-off date for each of your application servers that connect to sandbox and production.
At this time, you donât need to update the APNs SSL provider certificates issued to you by Apple.
Hello Developer: October 2024
Get your app up to speed, meet the team behind Lies of P, explore new student resources, and more.
Masters of puppets: How ROUND8 Studio carved out a niche for Lies of P
Lies of P is closer to its surprising source material than you might think.
Based on Carlo Collodiâs 1883 novel The Adventures of Pinocchio, the Apple Design Award-winning game is a macabre reimagining of the story of a puppet who longs to be a real boy. Collodiâs story is still best known as a childrenâs fable. But itâs also preprogrammed with more than its share of darkness â which made it an appealing foundation for Lies of P director Jiwon Choi.
âWhen we were looking for stories to base the game on, we had a checklist of needs,â says Choi. âWe wanted something dark. We wanted a story that was familiar but not entirely childish. And the deeper we dove into Pinocchio, the more we found that it checked off everything we were looking for.â
ADA FACT SHEET
Lies of P
- Winner: Visuals and Graphics
- Team: ROUND8 Studio (developer), NEOWIZ (publisher)
- Available on: Mac
- Team size: 100
- Previous accolades: App Store 2023 Mac Game of the Year, App Store Editorsâ Choice
Developed by the South Korea-based ROUND8 Studio and published by its parent company, NEOWIZ, Lies of P is a lavishly rendered dark fantasy adventure and a technical showpiece for Mac with Apple silicon. Yes, players control a humanoid puppet created by Geppetto. But instead of a little wooden boy with a penchant for little white lies, the gameâs protagonist is a mechanical warrior with an array of massive swords and a mission to battle through the burned-out city of Krat to find his maker â who isnât exactly the genial old woodcarver from the fable.
âThe story is well-known, and so are the characters,â says Choi. âWe knew that to create a lasting memory for gamers, we had to add our own twists.â
In the burned-out world of Lies of P, something this warm and beautiful canât be good news.
Those twists abound. The puppet is accompanied by a digital lamp assistant named Gemini â pronounced âjim-i-nee,â of course. A major character is a play on the originalâs kindly Blue Fairy. A game boss named Mad Donkey is a lot more irritable than the donkeys featured in Collodiâs story. And though nobodyâs nose grows in Lies of P, characters have opportunities to lie in a way that directly affects the storyline â and potentially one of the gameâs multiple endings.
We knew that to create a lasting memory for gamers, we had to add our own twists.
Jiwon Choi, Lies of P director
âIf you play without knowing the original story, you might not catch all those twists,â says Choi. âBut it goes the other way, too. Weâve heard from players who became curious about the original story, so they went back and found out about our twists that way.â
Thereâs nothing curious about the gameâs success: In addition to winning a 2024 Apple Design Award for Visuals and Graphics, Lies of P was named the App Storeâs 2023 Mac Game of the Year and has collected a bounty of accolades from the gaming community. Many of those call out the gameâs visual beauty, a world of rich textures, detailed lighting, and visual customization options like MetalFX upscaling and volumetric fog effects that let you style the ruined city to your liking.
Many of Collodiâs original characters have been updated for Lies of P, including the Black Rabbit Brotherhood, who appear to be hopping mad.
For that city, the ROUND8 team added another twist by moving the story from its original Italian locale to the Belle Époque era of pre-WWI France. "Everyone expected Italy, and everyone expected steampunk," says Choi, "but we wanted something that wasn't quite as common in the gaming industry. We considered a few other locations, like the wild west, but the Belle Époque was the right mix of beauty and prosperity. We just made it darker and gloomier."
We considered a few other locations, like the wild west, but the Belle Époque was the right mix of beauty and prosperity. We just made it darker and gloomier.
Jiwon Choi, Lies of P director
To create the gameâs fierce (and oily) combat, Choi and the team took existing Soulslike elements and added their own touches, like customizable weapons that can be assembled from items lying around Krat. âWe found that players will often find a weapon they like and use it until the ending,â says Choi. âWe found that inefficient. But we also know that everyone has a different taste for weapons.â
The system, he says, gives players the freedom to choose their own combinations instead of pursuing a âbestâ pre-ordained weapon. And the strategy worked: Choi says players are often found online discussing the best combinations rather than the best weapons. âThat was our intention when creating the system,â he says.
The game is set in the Belle Époque, an era known for its beauty and prosperity. "We just made it darker and gloomier," says Choi.
Also intentional: The gameâs approach to lying, another twist on the source material. âLying in the game isnât just about deceiving a counterpart,â says Choi. âHumans are the only species who can lie to one another, so lying is about exploring the core of this character.â
Itâs also about the murky ethics of lying: Lies of P suggests that, at times, nothing is as human â or humane â as a well-intentioned falsehood.
âThe puppet of Geppetto is not human,â says Choi. âBut at the same time, the puppet acts like a human and occasionally exhibits human behavior, like getting emotional listening to music. The idea was: Lying is something a human might do. Thatâs why itâs part of the game.â
Every environment in Lies of P â including the Krat Festival, which has seen better days â is rich with desolate detail.
The Lies of P story might not be done just yet. Choi and team are working on downloadable content and a potential sequel â possibly starring another iconic character whoâs briefly teased in the gameâs ending. But in the meantime, the team is taking a moment to enjoy the fruits of their success. âAt the beginning of development, I honestly doubted that we could even pull this off,â says Choi. âFor me, the most surprising thing is that we achieved this. And that makes us think, âWell, maybe we could do better next time.ââ
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Announcing the Swift Student Challenge 2025
Weâre thrilled to announce the Swift Student Challenge 2025. The Challenge provides the next generation of student developers the opportunity to showcase their creativity and coding skills by building app playgrounds with Swift.
Applications for the next Challenge will open in February 2025 for three weeks.
Weâll select 350 Swift Student Challenge winners whose submissions demonstrate excellence in innovation, creativity, social impact, or inclusivity. From this esteemed group, weâll name 50 Distinguished Winners whose work is truly exceptional and invite them to join us at Apple in Cupertino for three incredible days where theyâll gain invaluable insights from Apple experts and engineers, connect with their peers, and enjoy a host of unforgettable experiences.
All Challenge winners will receive one year of membership in the Apple Developer Program, a special gift from Apple, and more.
To help you get ready, weâre launching new coding resources, including Swift Coding Clubs designed for students to develop skills for a future career, build community, and get ready for the Challenge.
Upcoming regional age ratings in Australia and France
Apple is committed to making the App Store a safe place for everyone â especially kids. Within the next few months, the following regional age ratings for Australia and France will be implemented in accordance with local laws. No action is needed on your part. Where required by local regulations, regional ratings will appear alongside Apple global age ratings.
Australia
Apps with any instances of simulated gambling will display an R18+ regional age rating in addition to the Apple global age rating on the App Store in Australia.
France
Apps with a 17+ Apple global age rating will also display an 18+ regional age rating on the App Store in France.
Update on iPadOS 18 apps distributed in the European Union
The App Review Guidelines have been revised to add iPadOS to Notarization.
Starting September 16:
- Users in the EU can download iPadOS apps on the App Store and through alternative distribution. As mentioned in May, if you have entered into the Alternative Terms Addendum for Apps in the EU, iPadOS first annual installs will begin to accrue and the lower App Store commission rate will apply.
- Alternative browser engines can be used in iPadOS apps.
- Historical App Install Reports in App Store Connect that can be used with our fee calculator will include iPadOS.
If youâve entered into a previous version of the following agreements, be sure to sign the latest version, which supports iPadOS:
- Alternative Terms Addendum for Apps in the EU
- Web Browser Engine Entitlement Addendum for Apps in the EU
- Embedded Browser Engine Entitlement Addendum for Apps in the EU
Learn more about the update on apps distributed in the EU
Translations of the guidelines will be available on the Apple Developer website within one month.
Win-back offers for auto-renewable subscriptions now available
You can now configure win-back offers â a new type of offer for auto-renewable subscriptions â in App Store Connect. Win-back offers allow you to reach previous subscribers and encourage them to resubscribe to your app or game. For example, you can create a pay up front offer for a reduced subscription price of $9.99 for six months, with a standard renewal price of $39.99 per year. Based on your offer configuration, Apple displays these offers to eligible customers in various places, including:
- Across the App Store, including on your product page, as well as in personalized recommendations and editorial selections on the Today, Games, and Apps tabs.
- In your app or game.
- Via a direct link you share using your own marketing channels.
- In Subscription settings.
When creating win-back offers in App Store Connect, youâll determine customer eligibility, select regional availability, and choose the discount type. Eligible customers will be able to discover win-back offers this fall.
App Store submissions now open for the latest OS releases
iOS 18, iPadOS 18, macOS Sequoia, tvOS 18, visionOS 2, and watchOS 11 will soon be available to customers worldwide. Build your apps and games using the Xcode 16 Release Candidate and latest SDKs, test them using TestFlight, and submit them for review to the App Store. You can now start deploying seamlessly to TestFlight and the App Store from Xcode Cloud. With exciting new features like watchOS Live Activities, app icon customization, and powerful updates to Swift, Siri, Controls, and Core ML, you can deliver even more unique experiences on Apple platforms.
And beginning next month, youâll be able to bring the incredible new features of Apple Intelligence into your apps to help inspire the way users communicate, work, and express themselves.
Starting April 2025, apps uploaded to App Store Connect must be built with SDKs for iOS 18, iPadOS 18, tvOS 18, visionOS 2, or watchOS 11.
Hello Developer: September 2024
Get your apps ready by digging into these video sessions and resources.
Explore machine learning on Apple platforms
Bring expression to your app with Genmoji
Browse new resources: Learn how to make actions available to Siri and Apple Intelligence.
Need a boost? Check out our curated guide to machine learning and AI.
FEATURED
Get ready for OS updates: Dive into the latest updates with these developer sessions.
Level up your games: Port advanced games to Apple platforms; Design advanced games for Apple platforms.
Bring your vision to life: Design great visionOS apps; Design interactive experiences for visionOS.
Upgrade your iOS and iPadOS apps: Extend your app's controls across the system; Elevate your tab and sidebar experience in iPadOS.
Browse Apple Developer on YouTube
Get expert guidance: Check out curated guides to the latest features and technologies.
BEHIND THE DESIGN
Rytmos: A puzzle game with a global beat. Find out how Floppy Club built an Apple Design Award winner that sounds as good as it looks.
Behind the Design: The rhythms of Rytmos
MEET WITH APPLE
Reserve your spot for upcoming developer activities:
- Envision the future: Create great apps for visionOS: Find out how to build visionOS apps for a variety of use cases. (October 2, Cupertino)
- Build faster and more efficient apps: Learn how to optimize your use of Apple frameworks, resolve performance issues, and reduce launch time. (October 23, Cupertino)
Want to get Hello Developer in your inbox? Make sure you've opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts: We'd love to hear from you. If you have suggestions for our activities or stories, please let us know.
Behind the Design: The rhythms of Rytmos
Rytmos is a game that sounds as good as it looks.
With its global rhythms, sci-fi visuals, and clever puzzles, the 2024 Apple Design Award winner for Interaction is both a challenge and an artistic achievement. To solve each level, players must create linear pathways on increasingly complex boards, dodging obstacles and triggering buttons along the way. Itâs all set to a world-music backdrop; different levels feature genres as diverse as Ethiopian jazz, Hawaiian slack key guitar, and Gamelan from Indonesia, just to name a few.
And hereâs the hook: Every time you clear a level, you add an instrument to an ever-growing song.
âThe idea is that instead of reacting to the music, youâre creating it,â says Asger Strandby, cofounder of Floppy Club, the Denmark-based studio behind Rytmos. âWe do a lot to make sure it doesnât sound too wild. But the music in Rytmos is entirely generated by the way you solve the puzzles.â
ADA FACT SHEET
Rytmos
- Winner: Interaction
- Team: Floppy Club
- Available on: iPhone, iPad
- Team size: 5
Download Rytmos from the App Store
The artful game is the result of a partnership that dates back decades. In addition to being developers, Strandby and Floppy Club cofounder Niels Böttcher are both musicians who hail from the town of Aarhus in Denmark. "It's a small enough place that if you work in music, you probably know everyone in the community," laughs Böttcher.
The music in Rytmos comes mostly from traveling and being curious.
Niels Böttcher, Floppy Club cofounder
The pair connected back in the early 2000s, bonding over music more than games. "For me, games were this magical thing that you could never really make yourself," says Strandby. "I was a geeky kid, so I made music and eventually web pages on computers, but I never really thought I could make games until I was in my twenties." Instead, Strandby formed bands like Analogik, which married a wild variety of crate-digging samples (swing music, Eastern European folk, Eurovision-worthy pop) with hip-hop beats. Strandby was the frontman, while Böttcher handled the behind-the-scenes work. "I was the manager in everything but name," he says.
The band was a success: Analogik went on to release five studio albums and perform at Glastonbury, Roskilde, and other big European festivals. But when their music adventure ended, the pair moved back into separate tech jobs for several years, until the time came to join forces again. "We found ourselves brainstorming one day, thinking about, 'Could we combine music and games in some way?'" says Böttcher. "There are fun similarities between the two in terms of structures and patterns. We thought, 'Well, let's give it a shot.'"
Puzzles in Rytmos, like the one set on the planet "Hateta", come with a little history lesson about the music being played.
The duo launched work on a rhythm game that was powered by their histories and travels. "I've collected CDs and tapes from all over the world, so the genres in Rytmos are very carefully chosen," says Böttcher. "We really love Ethiopian jazz music, so we included that. Gamelan music (traditional Indonesian ensemble music that's heavy on percussion) is pretty wild, but incredible. And sometimes, you just hear an instrument and say, 'Oh, that tabla has a really nice sound.' So the music in Rytmos comes mostly from traveling and being curious."
The game took shape early, but the mazes in its initial versions were much more intricate. To help bring them down to a more approachable level, the Floppy Club team brought on art director Niels Fyrst. "He was all about making things cleaner and clearer," says Böttcher. "Once we saw what he was proposing, and how it made the game stronger, we realized, 'OK, maybe we're onto something.'"
Success in Rytmos isn't just that you're beating a level. It's that you're creating something.
Asger Strandby, Floppy Club cofounder
Still, even with a more manageable set of puzzles, a great deal of design complexity remained. Building Rytmos levels was like stacking a puzzle on a puzzle; the team not only had to build out the levels, but also create the music to match. To do so, Strandby and his brother, Bo, would sketch out a level and then send it over to Böttcher, who would sync it to music, a process that proved even more difficult than it seems.
"The sound is very dependent on the location of the obstacles in the puzzles," says Strandby. "That's what shapes the music that comes out of the game. So we'd test and test again to make sure the sound didn't break the idea of the puzzle."
Puzzles in Rytmos are all about getting from Point A to Point B, but things are never as simple as they seem.
The process, he says, was "quite difficult" to get right. "Usually with something like this, you create a loop, and then maybe add another loop, and then add layers on top of it," says Böttcher. "In Rytmos, hitting an emitter triggers a tone, percussion sound, or chord. One tone hits another tone, and then another, and then another. In essence, you're creating a pattern while playing the game."
We've actually gone back to make some of the songs more imprecise, because we want them to sound human.
Niels Böttcher, Floppy Club cofounder
The unorthodox approach leaves room for creativity. "Two different people's solutions can sound different," says Strandby. And when players win a level, they unlock a "jam mode" where they can play and practice freely. "It's just something to do with no rules after all the puzzling," laughs Strandby.
Yet despite all the technical magic happening behind the scenes, the actual musical results had to have a human feel. "We're dealing with genres that are analog and organic, so they couldn't sound electronic at all," says Böttcher. "We've actually gone back to make some of the songs more imprecise, because we want them to sound human."
Best of all, the game is shot through with creativity and cleverness, even offscreen. Each letter in the Rytmos logo represents the solution to a puzzle. The company's logo is a 3.5-inch floppy disk, a little nod to their first software love. ("That's all I wished for every birthday," laughs Böttcher.) And both Böttcher and Strandby hope that the game serves as an introduction to both sounds and people they might not be familiar with. "Learning about music is a great way to learn about a culture," says Strandby.
But mostly, Rytmos is an inspirational experience that meets its lofty goal. "Success in Rytmos isn't just that you're beating a level," says Strandby. "It's that you're creating something."
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Price and tax updates for apps, In-App Purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Price updates
On September 16:
- Pricing for apps and In-App Purchases¹ will be updated for the Chile, Laos, and Senegal storefronts if you haven't selected one of these as the base for your app or In-App Purchase. These updates also consider value-added tax (VAT) introductions listed in the "Tax updates" section below.
If you've selected the Chile, Laos, or Senegal storefront as the base for your app or In-App Purchase, prices won't change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won't change in any region if your In-App Purchase is an auto-renewable subscription, and won't change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In-App Purchases, and auto-renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your appâs base country or region
Pricing and availability start times by region
Set a price for an In-App Purchase
Tax updates
As of August 29:
Your proceeds from the sale of eligible apps and In-App Purchases have been modified in:
- Laos: VAT introduction of 10%
- Senegal: VAT introduction of 18%
- India: Equalization levy of 2% no longer applicable
Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Laos and Senegal.
Beginning in September:
Your proceeds from the sale of eligible apps and In-App Purchases will be modified in:
- Canada: Digital services tax introduction of 3%
- Finland: VAT increase from 24% to 25.5%
Learn more about your proceeds
¹ Excludes auto-renewable subscriptions.
It's Glowtime.
Join us for a special Apple Event on September 9 at 10 a.m. PT.
Watch on apple.com, Apple TV, or YouTube Live.
Upcoming changes to the browser choice screen, default apps, and app deletion for EU users
By the end of this year, we'll make changes to the browser choice screen, default apps, and app deletion for iOS and iPadOS for users in the EU. These updates come from our ongoing dialogue with the European Commission about compliance with the Digital Markets Act's requirements in these areas.
Developers of browsers offered in the browser choice screen in the EU will have additional information about their browser shown to users who view the choice screen, and will get access to more data about the performance of the choice screen. The updated choice screen will be shown to all EU users who have Safari set as their default browser. For details about the changes coming to the browser choice screen, view About the browser choice screen in the EU.
For users in the EU, iOS 18 and iPadOS 18 will also include a new Default Apps section in Settings that lists defaults available to each user. In future software updates, users will get new default settings for dialing phone numbers, sending messages, translating text, navigation, managing passwords, keyboards, and call spam filters. To learn more, view Update on apps distributed in the European Union.
Additionally, the App Store, Messages, Photos, Camera, and Safari apps will now be deletable for users in the EU.
Upcoming requirements for app distribution in the European Union
As a reminder, Account Holders or Admins in the Apple Developer Program need to enter trader status in App Store Connect for apps on the App Store in the European Union (EU) in order to comply with the Digital Services Act.
Please note these new dates and requirements:
- October 16, 2024: Trader status will be required to submit app updates. If you're a trader, you'll need to provide your trader information before you can submit your app for review.
- February 17, 2025: Apps without trader status will be removed from the App Store in the EU until trader status is provided and verified.
Apple Entrepreneur Camp applications are now open
Apple Entrepreneur Camp supports underrepresented founders and developers, and encourages the pipeline and longevity of these entrepreneurs in technology. Attendees benefit from one-on-one code-level guidance, receive unprecedented access to Apple engineers and experts, and become part of the extended global network of Apple Entrepreneur Camp alumni.
Applications are now open for female,* Black, Hispanic/Latinx, and Indigenous founders and developers. And this year we're thrilled to bring back our in-person programming at Apple in Cupertino. For those who can't attend in person, we're still offering our full program online. We welcome established entrepreneurs with app-driven businesses to learn more about eligibility requirements and apply today.
Apply by September 3, 2024.
* Apple believes that gender expression is a fundamental right. We welcome all women to apply to this program.
Updates to the StoreKit External Purchase Link Entitlement
In response to the announcement by the European Commission in June, we're making the following changes to Apple's Digital Markets Act compliance plan. We're introducing updated terms that will apply this fall for developers with apps in the European Union storefronts of the App Store that use the StoreKit External Purchase Link Entitlement. Key changes include:
- Developers can communicate and promote offers for purchases available at a destination of their choice. The destination can be an alternative app marketplace, another app, or a website, and it can be accessed outside the app or via a web view that appears in the app.
- Developers may design and execute within their apps the communication and promotion of offers. This includes providing information about prices of subscriptions or any other offer available both within and outside the app, and providing explanations or instructions about how to subscribe to offers outside the application. These communications must provide accurate information regarding the digital goods or services available for purchase.
- Developers may choose to use an actionable link that can be tapped, clicked, or scanned, to take users to their destination.
- Developers can use any number of URLs, without declaring them in the app's Info.plist.
- Links with parameters, redirects, and intermediate links to landing pages are permitted.
- Updated business terms for apps with the External Purchase Link Entitlement are being introduced to align with the changes to these capabilities.
Learn more by visiting Alternative payment options on the App Store in the European Union or request a 30-minute online consultation to ask questions and provide feedback about these changes.
Hello Developer: August 2024
Meet with Apple
Explore the latest developer activities, including sessions, consultations, and labs, all around the world.
BEHIND THE DESIGN
Creating the make-believe magic of Lost in Play: Discover how the developers of this Apple Design Award-winning game conjured up an imaginative world of oversized frogs, mischievous gnomes, and occasional pizzas.
Behind the Design: Creating the make-believe magic of Lost in Play
Get resourceful:
- Build local experiences with room tracking: Use room tracking in visionOS to provide custom interactions with physical spaces.
- Preview your app's interface in Xcode: Iterate designs quickly and preview your apps' displays across different Apple devices.
- Explore Apple Music Feed: Now available through the Apple Developer Program, Apple Music Feed provides bulk rich catalog metadata for developing experiences that link back to Apple Music.
- Updates to runtime protection in macOS Sequoia: Find out about updates to Gatekeeper.
- Evaluate your app's performance: Find out what's working (and what you can improve) with peer group benchmark metrics across app categories, business models, and download volumes.
SESSION OF THE MONTH
Extend your Xcode Cloud workflows: Discover how Xcode Cloud can adapt to your development needs.
Behind the Design: Creating the make-believe magic of Lost in Play
Lost in Play is a game created by and for people who love to play make-believe.
The 2024 Apple Design Award (ADA) winner for Innovation is a point-and-click adventure that follows two young siblings, Toto and Gal, through a beautifully animated world of forbidden forests, dark caverns, friendly frogs, and mischievous gnomes. To advance through the game's story, players complete fun mini-games and puzzles, all of which feel like a Saturday morning cartoon: Before the journey is out, the pair will fetch a sword from a stone, visit a goblin village, soar over the sea on an enormous bird, and navigate the real-world challenges of sibling rivalry. They will also order several pizzas.
ADA FACT SHEET
Lost in Play
- Winner: Innovation
- Team: Happy Juice Games, Israel
- Available on: iPhone, iPad
- Team size: 7
- Previous accolades: iPad Game of the Year (2023)
Lost in Play is the brainchild of Happy Juice Games, a small Israel-based team whose three cofounders drew inspiration from their own childhoods, and their own families. "We've all watched our kids get totally immersed playing make-believe games," says Happy Juice's Yuval Markovich. "We wanted to recreate that feeling. And we came up with the idea of kids getting lost, partly in their imaginations, and partly in real life."
The team was well-equipped for the job. Happy Juice cofounders Markovich, Oren Rubin, and Alon Simon all have backgrounds in TV and film animation, and knew what they wanted (a playful, funny adventure) even before drawing their first sketch. "As adults, we can forget how to enjoy simple things like that," says Simon, "so we set out to make a game about imagination, full of crazy creatures and colorful places."
Toto meets a new friend in the belly of a whale in Lost in Play. At right is an early sketch of the scene.
For his part, Markovich didn't just have a history in gaming; he taught himself English by playing text-based adventure games in the '80s. "You played those games by typing 'go north' or 'look around,' so every time I had to do something, I'd open the dictionary to figure out how to say it," he laughs. "At some point I realized, 'Oh wait, I know this language.'"
The story became a matter of, "OK, a goblin village sounds fun. How do we get there?"
Yuval Markovich, Happy Juice Games cofounder
But those games could be frustrating, as anyone who ever tried to "leave house" or "get ye flask" can attest. Lost in Play was conceived from day one to be light and navigable. "We wanted to keep it comic, funny, and easy," says Rubin. "That's what we had in mind from the very beginning."
Toto must go out on a limb to solve the ravens' puzzle in this screenshot and early sketch.
Lost in Play may be a linear experience (it feels closer to playing a movie than a sandbox game) but it's hardly simple. As befits a playable dream, its story feels a little unmoored, like it's being made up on the fly. That's because the team started with art, characters, and environments, and then went back to add a hero's journey to the elements.
"We knew we'd have a dream in the beginning that introduced a few characters. We knew we'd end up back at the house. And we knew we wanted one scene under the sea, and another in a maker space, and so on," says Markovich. "The story became a matter of, 'OK, a goblin village sounds fun. How do we get there?'"
Early concept sketches show the character design evolution of Toto and Gal.
Naturally, the team drew on their shared backgrounds in animation to shape the game all throughout its three-year development process, and not just in terms of art. Like a lot of cartoons, Lost in Play has no dialogue, both to increase accessibility and to enhance the story's illusion. Characters speak in a silly gibberish. And there are little cartoon-inspired tricks throughout; for instance, the camera shakes when something is scary. "When you study animation, you also study script writing, cinematography, acting, and everything else," Markovich says. "I think that's why I like making games so much: They have everything."
The best thing we hear is that it's a game parents enjoy playing with their kids.
Oren Rubin, Happy Juice Games cofounder
And in a clever acknowledgment of the realities of childhood, brief story beats return Toto and Gal to the real world to navigate practical issues like sibling rivalries. That's on purpose: Simon says early versions of the game were maybe a little too cute. "Early on, we had the kids sleeping neatly in their beds," says Simon. "But we decided that wasn't realistic. We added a bit more of them picking on each other, and a conflict in the middle of the game." Still, Markovich says that even the real-world interludes keep one foot in the imaginary world. "They may go through a park where an old lady is feeding pigeons, but then they walk left and there's a goblin in a swamp," he laughs.
Strange frogs distributing swords are the basis for one of Lost in Play's many inventive puzzles.
On the puzzle side, Lost in Play's mini-games are designed to strike the right level of challenge. The team is especially proud of the game's system of hints, which often present challenges in themselves. "We didn't want people getting trapped like I did in those old adventure games," laughs Markovich. "I loved those, but you could get stuck for months. And we didn't want people going online to find answers either." The answer: A hint system that doesn't just hand over the answer but gives players a feeling of accomplishment, an incentive to go back for more.
It all adds up to a unique experience for players of all ages, and that's by design too. "The best feedback we get is that it's suitable for all audiences," says Rubin, "and the best thing we hear is that it's a game parents enjoy playing with their kids."
Updates to runtime protection in macOS Sequoia
In macOS Sequoia, users will no longer be able to Control-click to override Gatekeeper when opening software that isn't signed correctly or notarized. They'll need to visit System Settings > Privacy & Security to review security information for software before allowing it to run.
If you distribute software outside of the Mac App Store, we recommend that you submit your software to be notarized. The Apple notary service automatically scans your Developer ID-signed software and performs security checks. When your software is ready for distribution, it's assigned a ticket to let Gatekeeper know it's been notarized so customers can run it with confidence.
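For teams that script their release pipeline, the submission step above can be driven from the command line. The sketch below is a hypothetical Swift helper, not Apple sample code: the archive name, app name, and "AC_PROFILE" keychain profile are placeholders, and it assumes the notarytool and stapler commands that ship with Xcode's command line tools.

import Foundation

// Minimal sketch: shell out to xcrun to notarize a Developer ID-signed archive,
// then staple the resulting ticket so Gatekeeper can verify it offline.
func xcrun(_ arguments: [String]) throws {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/xcrun")
    process.arguments = arguments
    try process.run()
    process.waitUntilExit()
}

// "MyApp.zip", "MyApp.app", and "AC_PROFILE" are placeholders.
try xcrun(["notarytool", "submit", "MyApp.zip",
           "--keychain-profile", "AC_PROFILE",
           "--wait"])                         // blocks until the notary service responds
try xcrun(["stapler", "staple", "MyApp.app"]) // attach the ticket to the app bundle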
Updated guidelines now available
The App Review Guidelines have been revised to support updated policies and upcoming features, and to provide clarification.
- Updated 4.7 to clarify that PC emulator apps can offer to download games.
- Added 4.7, 4.7.2, and 4.7.3 to Notarization.
View the App Review Guidelines
Get resources and support to prepare for App Review
Translations of the guidelines will be available on the Apple Developer website within one month.
Hello Developer: July 2024
Dive into all the new updates from WWDC24
Our doors are open. Join us to explore all the new sessions, documentation, and features through online and in-person activities held in 15 cities around the world.
Join us July 22-26 for online office hours to get one-on-one guidance about your app or game. And visit the forums where more engineers are ready to answer your questions.
WWDC24 highlights
BEHIND THE DESIGN
Positive vibrations: How Gentler Streak approaches fitness with "humanity". Find out why the team behind this Apple Design Award-winning lifestyle app believes success is about more than stats.
Behind the Design: How Gentler Streak approaches fitness with "humanity"
GET RESOURCEFUL
New sample code:
- Grow your skills with the BOT-anist: Build a multiplatform app that uses windows, volumes, and animations to create a robot botanist's greenhouse.
- Doing the things a particle can: Add a range of visual effects to a RealityKit view by attaching a particle emitter component to an entity.
- Chart a course for Destination Video: Leverage SwiftUI to build an immersive media experience.
- Design for games: Make your game feel at home on all Apple devices.
- Take control of controls: Provide quick access to a feature of your app from Control Center, the Lock Screen, or the Action button (see the sketch after this list).
- Tint your icons: Create dark and tinted app icon variants for iOS and iPadOS.
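To make the controls item above more concrete, here's a minimal, hypothetical sketch rather than Apple's sample code: the widget kind, intent, and label are invented, and it assumes the WidgetKit controls API introduced with iOS 18 (ControlWidget, StaticControlConfiguration, ControlWidgetButton) together with App Intents.

import WidgetKit
import SwiftUI
import AppIntents

// Hypothetical control: a button that jumps into the app.
struct StartWorkoutControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.start-workout") {
            // Tapping the control runs the intent below.
            ControlWidgetButton(action: StartWorkoutIntent()) {
                Label("Start Workout", systemImage: "figure.run")
            }
        }
    }
}

// The intent just opens the app; a real implementation would deep-link
// to the relevant feature instead.
struct StartWorkoutIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Workout"
    static var openAppWhenRun: Bool = true

    func perform() async throws -> some IntentResult {
        .result()
    }
}

Because placement is chosen by the user, the same control can appear in Control Center, on the Lock Screen, or behind the Action button without additional code.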
SESSION OF THE MONTH
Say hello to the next generation of CarPlay design system: Learn how the system at the heart of CarPlay allows each automaker to express their vehicle's character and brand.
Behind the Design: How Gentler Streak approaches fitness with "humanity"
Gentler Streak is a different kind of fitness tracker. In fact, to hear cofounder and CEO Katarina Lotrič tell it, it's not really a fitness tracker at all.
"We think of it more as a lifestyle app," says Lotrič, from the team's home office in Kranj, Slovenia. "We want it to feel like a compass, a reminder to get moving, no matter what that means for you," she says.
ADA FACT SHEET
The app's "Go Gentler" page suggests optimal workouts for a user's day.
Gentler Streak
- Winner: Social Impact
- Team: Gentler Stories d.o.o., Slovenia
- Available on: iPhone, iPad, Apple Watch
- Team size: 8
- Previous accolades: Apple Watch App of the Year (2022), Apple Design Award finalist (Visuals and graphics, 2023)
Download Gentler Streak from the App Store
Learn more about Gentler Streak
Meet the 2024 Apple Design Award winners
That last part is key. True to its name, the Apple Design Award-winning Gentler Streak takes a friendlier approach to fitness. Instead of focusing on performance (on the bigger, faster, and stronger), Gentler Streak meets people where they are, presenting workout suggestions, statistics, and encouragement for all skill levels.
"A lot of mainstream fitness apps can seem to be about pushing all the time," Lotrič says. "But for a lot of people, that isn't the reality. Everyone has different demands and capabilities on different days. We thought, 'Can we create a tool to help anyone know where they're at on any given day, and guide them to a sustainably active lifestyle?'"
If a 15-minute walk is what your body can do at that moment, that's great.
Katarina Lotrič, CEO and cofounder of Gentler Stories
To reach those goals, Lotrič and her Gentler Stories cofounders (UI/UX designer Andrej Mihelič, senior developer Luka Orešnik, and CTO and iOS developer Jasna Krmelj) created an app powered by an optimistic and encouraging vibe that considers physical fitness and mental well-being equally.
Fitness and workout data (collected from HealthKit) is presented in a colorful, approachable design. The app's core functions are available for free; a subscription unlocks premium features. And an abstract mascot named Yorhart (sound it out) adds to the light touch. "Yorhart helps you establish a relationship with the app and with yourself, because it's what your heart would be telling you," Lotrič says.
Good news from Yorhart: This user's needs and capabilities are being met perfectly.
It's working: In addition to the 2024 Apple Design Award for Social Impact, Gentler Streak was named 2022 Apple Watch App of the Year. What's more, it has an award-winning ancestor: Lotrič and Orešnik won an Apple Design Award in 2017 for Lake: Coloring Book for Adults.
The trio used the success of Lake to learn more about navigating the industry. But something else was happening during that time: The team, all athletes, began revisiting their own relationships with fitness. Lotrič suffered an injury that kept her from running for months and affected her mental health; she writes about her experiences in Gentler Streak's editorial section. Mihelič had a different issue. "My problem wasn't that I lacked motivation," he says. "It was that I worked out too much. I needed something that let me know when it was enough."
Statistics are just numbers. Without knowing how to interpret them, they are meaningless.
Katarina Lotrič, CEO and cofounder of Gentler Stories
As a way to reset, Mihelič put together an internal app, a simple utility that encouraged him to move but also allowed time for recuperation. "It wasn't very gentle," he laughs. "But the core idea was more or less the same. It guided but it didn't push. And it wasn't based on numbers; it was more explanatory."
Over time, the group began using Mihelič's app. "We saw right away that it was sticky," says Lotrič. "I came back to it daily, and it was just this basic prototype. After a while, we realized, 'Well, this works and is built, to an extent. Why don't we see if there's anything here?'"
Gentler Streak pulls workout information from HealthKit and presents it in simple, easy-to-understand charts.
That's when Lotrič, Orešnik, and Krmelj split from Lake to create Gentler Stories with Mihelič. "I wanted in because I loved the idea behind the whole company," Krmelj says. "It wasn't just about the app. I really like the app. But I really believed in this idea about mental well-being."
Early users believed it too: The team found that initial TestFlight audience members returned at a stronger rate than expected. "Our open and return rates were high enough that we kept thinking, 'Are these numbers even real?'" laughs Lotrič. The team found that those early users responded strongly to the "gentler" side, the approachable repositioning of statistics.
"We weren't primarily addressing the audience that most fitness apps seemed to target," says Lotrič. "We focused on everyone else, the people who maybe didn't feel like they belonged in a gym. Statistics are just numbers. Without knowing how to interpret them, they are meaningless. We wanted to change that and focus on the humanity." By fall of 2021, Gentler Streak was ready for prime time.
Gentler Streak on Apple Watch brings encouragement closer than ever before.
Today's version of the app follows the same strategy as Mihelič's original prototype. Built largely in UIKit, its health data is smartly organized, the design is friendly and consistent, and features like its Monthly Summary view (which shows how you're doing in relation to your history) focus less on comparison and more on progress, whatever that may mean. "If a 15-minute walk is what your body can do at that moment, that's great," Lotrič says. "That's how we make people feel represented."
The appâs social impact continues to grow. In the spring of 2024, Gentler Streak added support for Japanese, Korean, and traditional and simplified Chinese languages; previous updates added support for French, German, Italian, Spanish, and Brazilian Portuguese.
And those crucial features (fitness tracking, workout suggestions, metrics, and activity recaps) will remain available to everyone. "That goes with the Gentler Stories philosophy," says Lotrič. "We're bootstrapped, but at the same time we know that not everyone is in a position to support us. We still want to be a tool that helps people stay healthy not just for the first two weeks of the year or the summer, but all year long."
Alternative payment options in the EU in visionOS 1.2
Alternative payment options are now supported starting in visionOS 1.2 for apps distributed on the App Store in the EU.
Changes for apps in the EU now available in iPadOS 18 beta 2
The changes for apps in the European Union (EU), currently available to iOS users in the 27 EU member countries, can now be tested in iPadOS 18 beta 2 with Xcode 16 beta 2.
Also, the Web Browser Engine Entitlement Addendum for Apps in the EU and Embedded Browser Engine Entitlement Addendum for Apps in the EU now include iPadOS. If you've already entered into either of these addendums, be sure to sign the updated terms.
Learn more about the recent changes:
The App Store on Apple Vision Pro expands to new markets
Apple Vision Pro will launch in China mainland, Hong Kong, Japan, and Singapore on June 28 and in Australia, Canada, France, Germany, and the United Kingdom on July 12. Your apps and games will be automatically available on the App Store in regions you've selected in App Store Connect.
If you'd like, you can:
- Manage the availability of your visionOS apps and compatible iPhone or iPad apps at any time.
- Request to have your app evaluated directly on Apple Vision Pro.
- Localize your product page metadata for local audiences.
You can also learn how to build native apps to fully take advantage of exciting visionOS features.
Upcoming regional age ratings in Australia and South Korea
Apple is committed to making sure that the App Store is a safe place for everyone, especially kids. Within the next few months, you'll need to indicate in App Store Connect if your app includes loot boxes available for purchase. In addition, a regional age rating based on local laws will automatically appear on the product page of the apps listed below on the App Store in Australia and South Korea. No other action is needed. Regional age ratings appear in addition to Apple global age ratings.
Australia
A regional age rating is shown if Games is selected as the primary or secondary category in App Store Connect.
- 15+ regional age rating: Games with loot boxes available for purchase.
- 18+ regional age rating: Games with Frequent/Intense instances of Simulated Gambling indicated in App Store Connect.
South Korea
A regional age rating is shown if either Games or Entertainment is selected as the primary or secondary category in App Store Connect, or if the app has Frequent/Intense instances of Simulated Gambling in any category.
- KR-All regional age rating: Apps and games with an Apple global age rating of 4+ or 9+.
- KR-12 regional age rating: Apps and games with an Apple global age rating of 12+. Certain apps and games in this group may receive a KR-15 regional age rating from the South Korean Game Rating and Administration Committee (GRAC). If this happens, App Review will reach out to impacted developers.
- Certain apps and games may receive a KR-19 regional age rating from the GRAC. Instead of a pictogram, text will indicate this rating.
WWDC24 resources and survey
Thank you to everyone who joined us for an amazing week. We hope you found value, connection, and fun. You can continue to:
- Watch sessions at any time.
- Check out session highlights.
- Read about newly announced technologies.
- Get sample code from sessions.
- Dive into new and updated documentation.
We'd love to know what you thought of this year's conference. If you'd like to tell us about your experience, please complete the WWDC24 survey.
WWDC24 highlights
Browse the biggest moments from an incredible week of sessions.
Machine Learning & AI: Explore machine learning on Apple platforms; Bring expression to your app with Genmoji; Get started with Writing Tools; Bring your app to Siri; Design App Intents for system experiences.
Swift: What's new in Swift; Meet Swift Testing; Migrate your app to Swift 6; Go small with Embedded Swift.
SwiftUI & UI Frameworks: What's new in SwiftUI; SwiftUI essentials; Enhance your UI animations and transitions; Evolve your document launch experience; Squeeze the most out of Apple Pencil.
Developer Tools: What's new in Xcode 16; Extend your Xcode Cloud workflows.
Spatial Computing: Design great visionOS apps; Design interactive experiences for visionOS; Explore game input in visionOS; Bring your iOS or iPadOS game to visionOS; Create custom hover effects in visionOS; Work with windows in SwiftUI; Dive deep into volumes and immersive spaces; Customize spatial Persona templates in SharePlay.
Design: Design great visionOS apps; Design interactive experiences for visionOS; Design App Intents for system experiences; Design Live Activities for Apple Watch; Say hello to the next generation of CarPlay design system; Add personality to your app through UX writing.
Graphics & Games: Port advanced games to Apple platforms; Design advanced games for Apple platforms; Bring your iOS or iPadOS game to visionOS; Meet TabletopKit for visionOS.
App Store Distribution and Marketing: What's new in StoreKit and In-App Purchase; What's new in App Store Connect; Implement App Store Offers.
Privacy & Security: Streamline sign-in with passkey upgrades and credential managers; What's new in privacy.
App and System Services: Meet the Contact Access Button; Use CloudKit Console to monitor and optimize database activity; Extend your app's controls across the system.
Safari & Web: Optimize for the spatial web; Build immersive web experiences with WebXR.
Accessibility & Inclusion: Catch up on accessibility in SwiftUI; Get started with Dynamic Type; Build multilingual-ready apps.
Photos & Camera: Build a great Lock Screen camera capture experience; Build compelling spatial photo and video experiences; Keep colors consistent across captures; Use HDR for dynamic image experiences in your app.
Audio & Video: Enhance the immersion of media viewing in custom environments; Explore multiview video playback in visionOS; Build compelling spatial photo and video experiences.
Business & Education: Introducing enterprise APIs for visionOS; What's new in device management.
Health & Fitness: Explore wellbeing APIs in HealthKit; Build custom swimming workouts with WorkoutKit; Get started with HealthKit in visionOS.
Today @ WWDC24: Day 5
Revisit the biggest moments from WWDC24
Explore the highlights.
WWDC24 highlights
Catch WWDC24 recaps around the world: Join us for special in-person activities at Apple locations worldwide this summer.
Explore apps and games from the Keynote: Check out all the incredible featured titles.
How'd we do? We'd love to know your thoughts about this year's conference.
Today's WWDC24 playlist: Power Up. Get ready for one last day.
And that's a wrap! Thanks for being part of another incredible WWDC. It's been a fantastic week of celebrating, connecting, and exploring, and we appreciate the opportunity to share it all with you.
Today @ WWDC24: Day 4
Plan for platforms
Find out what's new across Apple platforms: Design great visionOS apps; Bring your iOS or iPadOS game to visionOS; Design App Intents for system experiences. Explore all platforms sessions.
Guides: Sessions, labs, documentation, and sample code, all in one place: WWDC24 iOS & iPadOS guide; WWDC24 Games guide; WWDC24 visionOS guide; WWDC24 watchOS guide.
Today's WWDC24 playlist: Coffee Shop. Comfy acoustic sounds for quieter moments.
One more to go: What a week! But we're not done yet; we'll be back tomorrow for a big Friday. #WWDC24
Today @ WWDC24: Day 3
All Swift, all day
Explore new Swift and SwiftUI sessions.
What's new in Swift; What's new in SwiftUI; Meet Swift Testing. Explore all Swift sessions.
Guides: Sessions, labs, documentation, and sample code, all in one place: WWDC24 Swift guide; WWDC24 Developer Tools guide; WWDC24 SwiftUI & UI Frameworks guide.
Go further with Swift: Connect with Apple experts and the worldwide developer community.
- Request a consultation in the WWDC labs.
- Explore the Apple Developer Forums.
- Connect with developers all over the world.
Cutting-edge sounds from the global frontiers of jazz.
More to come: Thanks for being a part of #WWDC24. We'll be back tomorrow with even more.
Today @ WWDC24: Day 2
Watch the Platforms State of the Union 5-minute recap
Explore everything announced at WWDC24 >
Introducing Apple Intelligence: Get smarter.
Explore machine learning on Apple platforms; Get started with Writing Tools; Bring your app to Siri. Explore all Machine Learning and AI sessions.
Guides: Sessions, labs, documentation, and sample code, all in one place: WWDC24 Machine Learning & AI guide; WWDC24 Design guide.
Go further with Apple Intelligence:
- Request a consultation in the WWDC labs.
- Explore the Apple Developer Forums.
- Connect with developers all over the world.
Summer sounds to change your latitude.
More tomorrow: Thanks for being a part of this incredible week. We'll catch you tomorrow for another big day of technology and creativity. #WWDC24
Find out what's new and download beta releases
Discover the latest advancements across Apple platforms, including the all-new Apple Intelligence, that can help you create even more powerful, intuitive, and unique experiences.
To start exploring and building with the latest features, download beta versions of Xcode 16, iOS 18, iPadOS 18, macOS 15, tvOS 18, visionOS 2, and watchOS 11.
Explore new documentation and sample code from WWDC24
Browse new and updated documentation and sample code to learn about the latest technologies, frameworks, and APIs introduced at WWDC24.
WWDC24 Design guide
WWDC24 GUIDE Design
Discover how this yearâs design announcements can help make your app shine on Apple platforms.
Whether you're refining your design, building for visionOS, or starting from scratch, this year's design sessions can take your app to the next level on Apple platforms. Find out what makes a great visionOS app, and learn how to design interactive experiences for the spatial canvas. Dive into creating advanced games for Apple devices, explore the latest SF Symbols, learn how to add personality to your app through writing, and much more.
Get the highlights
Download the design one-sheet.
VIDEOS
Explore the latest video sessions: Design great visionOS apps; Design advanced games for Apple platforms; Create custom environments for your immersive apps in visionOS; Explore game input in visionOS; Design Live Activities for Apple Watch; What's new in SF Symbols 6; Design interactive experiences for visionOS; Design App Intents for system experiences; Build multilingual-ready apps; Add personality to your app through UX writing; Get started with Dynamic Type; Create custom visual effects with SwiftUI.
FORUMS
Find answers and get advice: Ask questions and get advice about design topics on the Apple Developer Forums.
COMMUNITY
Meet the community: Explore a selection of developer activities all over the world during and after WWDC.
RESOURCES
Explore the latest resources:
- Get the latest Apple Design Resources kits and templates.
- Explore the latest SF Symbols.
- Download the fonts you need to design interfaces for your apps on Apple platforms.
- Find out all that's new in the HIG.
- Designing for games: Explore an all-new way to start creating games that feel comfortable and intuitive on Apple platforms.
- Tab bars: iPadOS apps now give people the option to switch between a tab bar or sidebar when navigating their app. Plus, items in the tab bar can now be customized (see the SwiftUI sketch after this list).
- App icons: Learn how people can customize their Home Screens to show dark and tinted icons.
- Controls: Discover how people can quickly and easily perform actions from your app from Control Center, the Lock Screen, and the Action button.
- Widgets: Learn how to tint widgets when a person has customized their Home Screen to show dark and tinted icons.
- Windows: Learn how to use volumes in visionOS to display 2D or 3D content that people can view from any angle.
- Live Activities: Craft Live Activities that look and feel at home in the Smart Stack in watchOS.
- Immersive experiences: Explore the latest guidance on immersion, including design environments and virtual hands.
- Game controls: Learn how to design touch controls for games on iOS and iPadOS.
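As a companion to the tab bar item above, here's a minimal SwiftUI sketch; the tab names and placeholder views are invented, not from the guide, and it assumes the Tab initializers and the sidebarAdaptable tab view style introduced alongside iPadOS 18, which let people switch between the tab bar and a sidebar.

import SwiftUI

// Hypothetical tab structure; "Library" and "Search" are placeholder features.
struct MainTabs: View {
    var body: some View {
        TabView {
            Tab("Library", systemImage: "books.vertical") {
                Text("Library") // placeholder content
            }
            Tab("Search", systemImage: "magnifyingglass") {
                Text("Search") // placeholder content
            }
        }
        // On iPad, this style lets people toggle between a tab bar and a sidebar.
        .tabViewStyle(.sidebarAdaptable)
    }
}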
WWDC24 Swift guide
WWDC24 GUIDE Swift
Your guide to everything new in Swift, related tools, and supporting frameworks.
From expanded support across platforms and community resources, to an optional language mode with an emphasis on data-race safety, this year's Swift updates meet you where you are. Explore this year's video sessions to discover everything that's new in Swift 6, find tools that support migrating to the new language mode at your own pace, learn about new frameworks that support developing with Swift, and much more.
Get the highlights
Download the Swift one-sheet.
VIDEOS
Explore the latest video sessions: What's new in Swift; What's new in SwiftData; Migrate your app to Swift 6; Go small with Embedded Swift; A Swift Tour: Explore Swift's features and design; Create a custom data store with SwiftData; Explore the Swift on Server ecosystem; Explore Swift performance; Consume noncopyable types in Swift; Track model changes with SwiftData history.
FORUMS
Find answers and get advice: Find support from Apple experts and the developer community on the Apple Developer Forums, and check out the Swift Forums on swift.org.
Explore Swift on the Apple Developer Forums
COMMUNITY
Meet the community: Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into Apple Developer documentation:
- Discover new and updated Swift documentation
- Explore the Swift Standard Library
- Learn how to migrate your code to Swift 6
- Reference the Swift programming language guide
- Read A Swift Tour: An overview of the features and syntax of Swift
- Explore the new Swift-dedicated GitHub organization
- Learn more about the Swift Package Manager (SwiftPM)
Today @ WWDC24: Day 1
It all starts here
Keynote
The exciting reveal of the latest Apple software and technologies. 10 a.m. PT.
Platforms State of the Union
The newest advancements on Apple platforms. 1 p.m. PT.
Where to watch
- Apple Developer app and website
- Apple Developer YouTube channel
The full lineup of sessions arrives after the Keynote. And you can start exploring the first batch right after the Platforms State of the Union.
What to do at WWDC24: The Keynote is only the beginning. Explore the first day of activities.
- Request your spot in the labs after the Keynote.
- Explore the Apple Developer Forums.
- Connect with developers all over the world.
The Apple Design Awards recognize unique achievements in app and game design â and provide a moment to step back and celebrate the innovations of the Apple developer community.
More to come: Thanks for reading and get some rest! We'll be back tomorrow for a very busy Day 2. #WWDC24
WWDC24 SwiftUI & UI Frameworks guide
WWDC24 GUIDE SwiftUI & UI Frameworks
Design and build your apps like never before.
With enhancements to live previews in Xcode, new customization options for animations and styling, and updates to interoperability with UIKit and AppKit views, SwiftUI is the best way to build apps for Apple platforms. Dive into the latest sessions to discover everything new in SwiftUI, UIKit, AppKit, and more. Make your app stand out with more options for custom visual effects and enhanced animations. And explore sessions that cover the essentials of building apps with SwiftUI.
Get the highlights
Download the SwiftUI one-sheet.
VIDEOS
Explore the latest video sessions: What's new in SwiftUI; What's new in AppKit; What's new in UIKit; SwiftUI essentials; What's new in watchOS 11; Swift Charts: Vectorized and function plots; Elevate your tab and sidebar experience in iPadOS; Bring expression to your app with Genmoji; Squeeze the most out of Apple Pencil; Catch up on accessibility in SwiftUI; Migrate your TVML app to SwiftUI; Get started with Writing Tools; Dive deep into volumes and immersive spaces; Work with windows in SwiftUI; Enhance your UI animations and transitions; Evolve your document launch experience; Build multilingual-ready apps; Create custom hover effects in visionOS; Tailor macOS windows with SwiftUI; Demystify SwiftUI containers; Support semantic search with Core Spotlight; Create custom visual effects with SwiftUI.
FORUMS
Find answers and get advice: Connect with Apple experts and other developers on the Apple Developer Forums.
View discussions about SwiftUI & UI frameworks
COMMUNITY
Meet the community: Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation:
- Level up the accessibility of your SwiftUI app.
- Interact with nearby points of interest.
- Build a document-based app with SwiftUI.
- Customize window styles and state-restoration behavior in macOS.
- Enhance your app's content with tab navigation.
- Create visual effects with SwiftUI.
- Unify your appâs animations.
- Find all of this yearâs SwiftUI, AppKit, and UIKit updates.
- Explore updates in the Human Interface Guidelines (HIG).
WWDC24 Developer Tools guide
WWDC24 GUIDE Developer Tools
Explore a wave of updates to developer tools that make building apps and games easier and more efficient than ever.
Watch the latest video sessions to explore a redesigned code completion experience in Xcode 16, and say hello to Swift Assist, a companion for all your coding tasks. Level up your code with the help of Swift Testing, the new, easy-to-learn framework that leverages Swift features to help enhance your testing experience. Dive deep into debugging, updates to Xcode Cloud, and more.
Get the highlights
Download the developer tools one-sheet.
VIDEOS
Explore the latest video sessions: Meet Swift Testing; What's new in Xcode 16; Go further with Swift Testing; Xcode essentials; Run, Break, Inspect: Explore effective debugging in LLDB; Break into the RealityKit debugger; Demystify explicitly built modules; Extend your Xcode Cloud workflows; Analyze heap memory.
FORUMS
Find answers and get advice: Find support from Apple experts and the developer community on the Apple Developer Forums.
Explore developer tools on the forums
COMMUNITY
Meet the community: Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation: Expand your tool belt with new and updated articles and documentation.
- Explore updates in Xcode 16
- Discover Swift Testing
- Migrate a test from XCTest
- Define test functions
- Organize test functions with suite types
- Implement parameterized tests
- Enable and disable tests
- Limit the running time of tests
- Add tags to tests
- Add comments to tests
- Associate bugs with tests
- Interpret bug identifiers
WWDC24 iOS & iPadOS guide
WWDC24 GUIDE iOS & iPadOS
Your guide to all the new features and tools for building apps for iPhone and iPad.
Learn how to create more customized and intelligent apps that appear in more places across the system with the latest Apple technologies. And with Apple Intelligence, you can bring personal intelligence into your apps to deliver new capabilities, all with great performance and built-in privacy. Explore new video sessions about controls, Live Activities, App Intents, and more.
Get the highlights
Download the iOS & iPadOS one-sheet.
VIDEOS
Explore the latest video sessions: Bring your app to Siri; Discover RealityKit APIs for iOS, macOS, and visionOS; Explore machine learning on Apple platforms; Elevate your tab and sidebar experience in iPadOS; Extend your app's controls across the system; Streamline sign-in with passkey upgrades and credential managers; What's new in App Intents; Squeeze the most out of Apple Pencil; Meet FinanceKit; Bring your iOS or iPadOS game to visionOS; Build a great Lock Screen camera capture experience; Design App Intents for system experiences; Bring your app's core features to users with App Intents; Broadcast updates to your Live Activities; Unlock the power of places with MapKit; Implement App Store Offers; What's new in Wallet and Apple Pay; Meet the Contact Access Button; What's new in device management.
FORUMS
Find answers and get advice: Connect with Apple experts and other developers on the Apple Developer Forums.
COMMUNITY
Meet the community: Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Get a head start with sample code.
Dive into documentation:
- Discover WidgetKit for controls.
- Find out how to set up broadcast push notifications, send channel management requests to APNs, and send broadcast push notification requests to APNs.
- Check out the new LockedCameraCapture, Media Accessibility, AccessorySetupKit, and Contact Provider frameworks.
- Explore object tracking with ARKit.
- Learn how to elevate your iPad app with the tab sidebar, as well as with a floating tab bar and integrated sidebar, using SwiftUI or UIKit.
WWDC24 Machine Learning & AI guide
WWDC24 GUIDE Machine Learning & AI
Bring personal intelligence to your apps.
Apple Intelligence brings powerful, intuitive, and integrated personal intelligence to Apple platforms, designed with privacy from the ground up. And enhancements to our machine learning frameworks let you run and train your machine learning and artificial intelligence models on Apple devices like never before.
Get the highlights
Download the Machine Learning & AI one-sheet.
VIDEOS
Explore the latest video sessions: Get the most out of Apple Intelligence by diving into sessions that cover updates to Siri integration and App Intents, and how to support Writing Tools and Genmoji in your app. And learn how to bring machine learning and AI directly into your apps using our machine learning frameworks.
Explore machine learning on Apple platforms; Bring your app to Siri; Bring your app's core features to users with App Intents; Bring your machine learning and AI models to Apple silicon; Get started with Writing Tools; Deploy machine learning and AI models on-device with Core ML; Support real-time ML inference on the CPU; Bring expression to your app with Genmoji; What's new in App Intents; What's new in Create ML; Design App Intents for system experiences; Discover Swift enhancements in the Vision framework; Meet the Translation API; Accelerate machine learning with Metal; Train your machine learning and AI models on Apple GPUs.
FORUMS
Find answers and get advice: Connect with Apple experts and other developers on the Apple Developer Forums.
Dive into Machine learning and AI on the forums
COMMUNITY
Meet the community: Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation:
- Build a search interface for your app.
- Bring Writing Tools to your app with UITextView for UIKit and NSTextView for AppKit (see the sketch after this list).
- Add expression to your app with Genmoji using NSAdaptiveImageGlyph in UIKit and AppKit.
- Integrate machine learning models into your app using Core ML.
- Create your own machine learning models using Create ML.
- Discover all of the latest Core ML updates.
- Find out what's new in the Vision framework.
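As a rough illustration of the Writing Tools item above, the sketch below configures a UITextView. The property names reflect my understanding of the iOS 18 text-input traits (writingToolsBehavior and allowedWritingToolsResultOptions), so treat them as assumptions rather than a definitive reference; UITextView already participates in Writing Tools by default.

import UIKit

// Minimal sketch: tune how Writing Tools behaves in a plain notes field.
final class NotesViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        view.addSubview(textView)

        // Allow full inline rewriting (assumed iOS 18 trait).
        textView.writingToolsBehavior = .complete
        // Constrain results to plain text for this simple field (assumed API).
        textView.allowedWritingToolsResultOptions = [.plainText]
    }
}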
WWDC24 Games guide
WWDC24 GUIDE Games
Create the next generation of games for millions of players worldwide.
Learn how to create cutting-edge gaming experiences across a unified gaming platform built with tightly integrated graphics software and a scalable hardware architecture. Explore new video sessions about gaming in visionOS, game input, the Game Porting Toolkit 2, and more.
Get the highlights
Download the games one-sheet.
VIDEOS
Explore the latest video sessions: Render Metal with passthrough in visionOS; Meet TabletopKit for visionOS; Port advanced games to Apple platforms; Design advanced games for Apple platforms; Explore game input in visionOS; Bring your iOS or iPadOS game to visionOS; Accelerate machine learning with Metal.
FORUMS
Find answers and get advice: Connect with Apple experts and other developers on the Apple Developer Forums.
COMMUNITY
Meet the community: Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Get a head start with sample code.
Dive into documentation:
- Check out updated design guidance for games.
- Easily bring your game to Apple platforms using the Game Porting Toolkit 2.
- Meet the new TabletopKit framework.
- Learn how to play sound from a location in a 3D scene.
- Learn how to manage your game window for Metal in macOS.
- Get details on adapting your game interface for smaller screens.
- Discover how to improve your game's graphics performance and settings.
- Find out how to improve the player experience for games with large downloads.
- Explore adding touch controls to games that support game controllers.
WWDC24 watchOS guide
Your guide to all the new features and tools for building apps for Apple Watch.
Learn how to take advantage of the increased intelligence and capabilities of the Smart Stack. Explore new video sessions about relevancy cues, interactivity, Live Activities, and double tap.
Get the highlights
Download the watchOS one-sheet.
VIDEOS
Explore the latest video sessions
- What's new in watchOS 11
- Bring your Live Activity to Apple Watch
- What's new in SwiftUI
- SwiftUI essentials
- Design Live Activities for Apple Watch
- Catch up on accessibility in SwiftUI
- Build custom swimming workouts with WorkoutKit
- Demystify SwiftUI containers
FORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
View discussions about watchOS
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation
- Discover double tap.
- Learn how to use the latest technologies to build apps for Apple Watch.
- Get updated guidance on design for Apple Watch.
- Visit the Apple Watch site.
WWDC24 sessions schedule, lab requests, guides, and documentation now available
WWDC24 is here! Here's how to make the most of your week:
- Watch daily sessions.
- Request one-on-one online lab appointments with Apple experts.
- Check out curated guides to the weekâs biggest announcements.
- Dive into new and updated documentation.
WWDC24 visionOS guide
The infinite canvas is waiting for you.
In this year's sessions, you'll get an overview of great visionOS app design, explore object tracking, and discover new RealityKit APIs. You'll also find out how to build compelling spatial photo and video experiences, explore enterprise APIs for visionOS, find out how to render Metal with passthrough, and much more.
Get the highlights
Download the visionOS one-sheet.
VIDEOS
Explore the latest video sessions
- Design great visionOS apps
- Explore object tracking for visionOS
- Compose interactive 3D content in Reality Composer Pro
- Discover RealityKit APIs for iOS, macOS, and visionOS
- Create enhanced spatial computing experiences with ARKit
- Enhance your spatial computing app with RealityKit audio
- Build compelling spatial photo and video experiences
- Meet TabletopKit for visionOS
- Render Metal with passthrough in visionOS
- Explore multiview video playback in visionOS
- Introducing enterprise APIs for visionOS
- Dive deep into volumes and immersive spaces
- Build a spatial drawing app with RealityKit
- Optimize for the spatial web
- Explore game input in visionOS
- Create custom environments for your immersive apps in visionOS
- Enhance the immersion of media viewing in custom environments
- Design interactive experiences for visionOS
- Create custom hover effects in visionOS
- Optimize your 3D assets for spatial computing
- Discover area mode for Object Capture
- Bring your iOS or iPadOS game to visionOS
- Build immersive web experiences with WebXR
- Get started with HealthKit in visionOS
- What's new in Quick Look for visionOS
- What's new in USD and MaterialX
- Customize spatial Persona templates in SharePlay
- Break into the RealityKit debugger
- What's new in SwiftUI
FORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
View discussions about visionOS
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Get a head start with sample code
- BOT-anist: Discover how the RealityKit debugger lets you inspect the entity hierarchy of spatial apps, debug rogue transformations, detect bad behavior, and find missing entities.
- Destination Video: Leverage 3D video and Spatial Audio to deliver an immersive experience.
- Incorporating real-world surroundings in an immersive experience: Make your app's content respond to the local shape of the world.
- Simulating particles in your visionOS app: Add a range of visual effects to a RealityKit view by attaching a particle emitter component to an entity.
- Simulating physics with collisions in your visionOS app: Create entities that behave and react like physical objects in a RealityKit view.
- Discover new visionOS content in the HIG.
- Creating your first visionOS app: Learn new tips for building a new visionOS app using SwiftUI and platform-specific features.
- Adding 3D content to your app: Explore the latest in adding depth and dimension to your visionOS app.
- Understanding RealityKit's modular architecture: Learn how everything fits together in RealityKit.
- Designing RealityKit content with Reality Composer Pro: Discover updates that can help you quickly create RealityKit scenes for your visionOS app.
- Presenting windows and spaces: Find out how to open and close the scenes that make up your app's interface (a minimal SwiftUI sketch follows this list).
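As a rough illustration of the windows-and-spaces item above, here is a minimal SwiftUI sketch of opening and closing scenes by identifier. The app structure, view names, and the "Main"/"Inspector" identifiers are invented for this example.

```swift
import SwiftUI

// Minimal sketch: declare two window scenes and open/close them from views.
// All identifiers and view names here are hypothetical.
@main
struct GalleryApp: App {
    var body: some Scene {
        WindowGroup(id: "Main") {
            MainView()
        }
        WindowGroup(id: "Inspector") {
            InspectorView()
        }
    }
}

struct MainView: View {
    // Environment action for presenting another window scene.
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show Inspector") {
            openWindow(id: "Inspector")   // Opens the scene declared with this id.
        }
    }
}

struct InspectorView: View {
    // Environment action for dismissing a window scene by id.
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Close") {
            dismissWindow(id: "Inspector")
        }
    }
}
```

On visionOS, immersive content follows the same pattern with an ImmersiveSpace scene and the openImmersiveSpace and dismissImmersiveSpace environment actions.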
Updated agreements and guidelines now available
The App Review Guidelines, Apple Developer Program License Agreement, and Apple Developer Agreement have been updated to support updated policies and upcoming features, and to provide clarification. Please review the changes below and accept the updated terms as needed.
App Review Guidelines
- 2.1(a): Added to Notarization.
- 2.1(b): Added a requirement to explain in your review notes why configured in-app items cannot be found or reviewed in your app.
- 2.5.8: We will no longer reject apps that simulate multi-app widget experiences.
- 4.6: This guideline has been removed.
- Sections 1, 6(B): Updated "Apple ID" to "Apple Account."
- Section 16(A): Clarified export compliance requirements.
- Section 18: Updated terminology for government end users.
- Definitions, Section 2.1, 3.3.6(C), 3.3.10(A), 14.2(C), Attachment 9, Schedules 1-3: Updated "Apple ID" to "Apple Account."
- Definitions: Clarified definition of Apple Maps Service.
- Definitions, Section 3.3.6(F): Specified requirements for using the Apple Music Feed API.
- Definitions, Section 3.3.8(F): Added terms for use of the Now Playing API.
- Section 3.2(h): Added terms for use of Apple Software and Services.
- Section 6.5: Added terms for use of TestFlight.
- Section 7.7: Added terms on customization of icons.
- Section 11.2(f), 14.8(A): Clarified export compliance requirements.
- Section 14.9: Updated terminology for government end users.
- Attachment 5, Section 3.1: Added terms for use of Wallet pass templates.
Please sign in to your account to review and accept the updated terms.
View all agreements and guidelines
Translations of the terms will be available on the Apple Developer website within one month.
Hello Developer: June 2024
With WWDC24 just days away, there's a lot of ground to cover, so let's get right to it.
WWDC24
Introducing the 2024 Apple Design Award winners
Innovation. Ingenuity. Inspiration.
WWDC24: Everything you need to know
From the Keynote to the last session drop, here are the details for an incredible week of sessions, labs, community activities, and more.
Download the Apple Developer app >
Subscribe to Apple Developer on YouTube >
Watch the Keynote
Don't miss the exciting reveal of the latest Apple software and technologies at 10 a.m. PT on Monday, June 10.
Watch the Platforms State of the Union
Here's your deep dive into the newest advancements on Apple platforms. Join us at 1 p.m. PT on Monday, June 10.
Get ready for sessions
Learn something new in video sessions posted to the Apple Developer app, website, and YouTube channel. The full schedule drops after the Keynote on Monday, June 10.
Prepare for labs
Here's everything you need to know to get ready for online labs.
Find answers on the forums
Discuss the conference's biggest moments on the Apple Developer Forums.
Get the most out of the forums >
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
Explore community activities >
Say hello to the first WWDC24 playlist
The official WWDC24 playlists drop right after the Keynote. Until then, here's a teaser playlist to get you excited for the week.
Coming up: One incredible week
Have a great weekend, and we'll catch you on Monday. #WWDC24
Watch the WWDC24 Keynote
WWDC24
Tune in at 10 a.m. PT on June 10 to catch the exciting reveal of the latest Apple software and technologies.
- Keynote
- Keynote (ASL)
Watch the WWDC24 Platforms State of the Union
WWDC24
Tune in at 1 p.m. PT on June 10 to dive deep into the newest advancements on Apple platforms.
- Platforms State of the Union
- Platforms State of the Union (ASL)
Price and tax updates for apps, In-App Purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Price updates
On June 21, pricing for apps and In-App Purchases¹ will be updated for the Egypt, Ivory Coast, Nepal, Nigeria, Suriname, and Zambia storefronts if you haven't selected one of these as the base for your app or In-App Purchase.¹ These updates also consider the following value-added tax (VAT) changes:
- Ivory Coast: VAT introduction of 18%
- Nepal: VAT introduction of 13% and digital services tax of 2%
- Suriname: VAT introduction of 10%
- Zambia: VAT introduction of 16%
Prices won't change on the Egypt, Ivory Coast, Nepal, Nigeria, Suriname, or Zambia storefront if you've selected that storefront as the base for your app or In-App Purchase.¹ Prices on other storefronts will be updated to maintain equalization with your chosen base price.
Prices won't change in any region if your In-App Purchase is an auto-renewable subscription, and won't change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In-App Purchases, and auto-renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app's base country or region
Pricing and availability start times by region
Set a price for an In-App Purchase
Tax updates
Your proceeds for sales of apps and In-App Purchases will change to reflect the new tax rates and updated prices. Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Ivory Coast, Nepal, Suriname, and Zambia.
As of today, June 6, your proceeds from the sale of eligible apps and In-App Purchases have been modified in the following countries to reflect introductions of or changes in tax rates.
- France: Digital services tax no longer applicable
- Ivory Coast: VAT introduction of 18%
- Malaysia: Sales and Service Tax (SST) increased to 8% from 6%
- Nepal: VAT introduction of 13% and digital services tax introduction of 2%
- Norway: VAT increased to 20% from 0% for certain Norwegian news publications
- Suriname: VAT introduction of 10%
- Uganda: Digital services tax introduction of 5%
- Zambia: VAT introduction of 16%
The Fitness and Health category has a new attribute: "Content is primarily accessed through streaming." If this is relevant to your apps or In-App Purchases that offer fitness video streaming, review and update your selections in the Pricing and Availability section of Apps in App Store Connect.
Learn about setting tax categories
¹ Excludes auto-renewable subscriptions.
Introducing the 2024 Apple Design Award winners
Every year, the Apple Design Awards recognize innovation, ingenuity, and technical achievement in app and game design.
The incredible developers behind this year's finalists have shown what can be possible on Apple platforms, and helped lay the foundation for what's to come.
We're thrilled to present the winners of the 2024 Apple Design Awards.
Action packed.
One week to go. Don't miss the exciting reveal of the latest Apple software and technologies.
Keynote kicks off at 10 a.m. PT on June 10.
Join us for the Platforms State of the Union at 1 p.m. PT on June 10.
Introducing the 2024 Apple Design Award finalists
Every year, the Apple Design Awards recognize innovation, ingenuity, and technical achievement in app and game design.
But they've also become something more: a moment to step back and celebrate the Apple developer community in all its many forms.
Coming in swiftly.
Join the worldwide developer community for an incredible week of technology and creativity, all online and free. WWDC24 takes place from June 10-14.
Check out the new Apple Developer Forums
The Apple Developer Forums have been redesigned for WWDC24 to help developers connect with Apple experts, engineers, and each other to find answers and get advice.
Apple Developer Relations and Apple engineering are joining forces to field your questions and work to solve your technical issues. You'll have access to an expanded knowledge base and enjoy quick response times, so you can get back to creating and enhancing your app or game. Plus, Apple Developer Program members now have priority access to expert advice on the forums.
Hello Developer: May 2024
It won't be long now! WWDC24 takes place online from June 10 through 14, and we're here to help you get ready for the biggest developer event of the year. In this edition:
- Explore Pathways, a brand-new way to learn about developing for Apple platforms.
- Meet three Distinguished Winners of this year's Swift Student Challenge.
- Get great tips from the SharePlay team.
- Browse new developer activities about accessibility, machine learning, and more.
WWDC24
Introducing Pathways
If you're new to developing for Apple platforms, we've got an exciting announcement. Pathways are simple and easy-to-navigate collections of the videos, documentation, and resources you'll need to start building great apps and games. Because Pathways are self-directed and can be followed at your own pace, they're the perfect place to begin your journey.
Explore Pathways for Swift, SwiftUI, design, games, visionOS, App Store distribution, and getting started as an Apple developer.
Meet three Distinguished Winners of the Swift Student Challenge
Elena Galluzzo, Dezmond Blair, and Jawaher Shaman all drew inspiration from their families to create their winning app playgrounds. Now, they share the hope that their apps can make an impact on others as well.
Meet Elena, Dezmond, and Jawaher >
MEET WITH APPLE EXPERTS
Check out the latest worldwide developer activities
- Meet with App Review online to discuss the App Review Guidelines and explore best practices for a smooth review process. Sign up for May 14.
- Join us in Bengaluru for a special in-person activity to commemorate Global Accessibility Awareness Day. Sign up for May 15.
- Learn how Apple machine learning frameworks can help you create more intelligent apps and games in an online activity. Sign up for May 19.
Browse the full schedule of activities >
NEWS
Explore Apple Pencil Pro
Bring even richer and more immersive interactions to your iPad app with new features, like squeeze gestures, haptic feedback, and barrel-roll angle tracking.
BEHIND THE DESIGN
The rise of Tide Guide
Here's the swell story of how fishing with his grandfather got Tucker MacDonald hooked on creating his tide-predicting app.
"I taught myself": Tucker MacDonald and the rise of Tide Guide
GROW YOUR BUSINESS
Explore simple, safe transactions with In-App Purchase
Take advantage of powerful global pricing tools, promotional features, analytics only available from Apple, built-in customer support, and fraud detection.
Q&A
Get shared insights from the SharePlay team
Learn about shared experiences, spatial Personas, that magic "shockwave" effect, and more.
Q&A with the SharePlay team
DOCUMENTATION
Browse new and updated docs
- Explore the new framework for converting Pages, Numbers, and Keynote files to PDF, enabling you to show an inline preview in a web browser.
- Check out Writing ARM64 code for Apple platforms for an update on data-independent timing.
- Visit the HIG for new and enhanced guidance on virtual hands and interactive elements in visionOS, sheets in iPadOS, and more.
Want to get Hello Developer in your inbox? Make sure you've opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We'd love to hear from you. If you have suggestions for our activities or stories, please let us know.
Q&A with the SharePlay team
SharePlay is all about creating meaningful shared experiences in your app. By taking advantage of SharePlay, your app can provide a real-time connection that synchronizes everything from media playback to 3D models to collaborative tools across iPhone, iPad, Mac, Apple TV, and Apple Vision Pro. We caught up with the SharePlay team to ask about creating great SharePlay experiences, spatial Personas, that magic "shockwave" effect, and more.
How does a person start a SharePlay experience?
Anyone can begin a group activity by starting a FaceTime call and then launching a SharePlay-supported app. When they do, a notification about the group activity will appear on all participants' screens. From there, participants can join, and come and go, as they like. You can also start a group activity from your app, from the share sheet, or by adding a SharePlay button to your app.
How can I use SharePlay to keep media playback in sync?
SharePlay supports coordinated media playback using AVKit. You can use the system coordinator to synchronize your own player across multiple participants. If you have an ad-supported app, you can synchronize both playback and ad breaks. SharePlay also provides the GroupSessionMessenger API, which lets participants communicate in near-real time.
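To make the answer above a little more concrete, here is a rough sketch of defining a group activity and exchanging messages with GroupSessionMessenger. The WatchTogether activity and PlaybackCommand message types are invented for this example; they are not from the Q&A.

```swift
import Foundation
import GroupActivities

// Hypothetical activity describing what the group does together.
struct WatchTogether: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Watch Together"
        metadata.type = .watchTogether
        return metadata
    }
}

// Hypothetical message exchanged between participants in near-real time.
struct PlaybackCommand: Codable {
    var isPlaying: Bool
    var position: TimeInterval
}

func observeSessions() async {
    // Receive sessions created when someone activates WatchTogether on a FaceTime call.
    for await session in WatchTogether.sessions() {
        let messenger = GroupSessionMessenger(session: session)
        session.join()

        // Apply commands sent by other participants.
        Task {
            for await (command, _) in messenger.messages(of: PlaybackCommand.self) {
                print("Seek to \(command.position); playing: \(command.isPlaying)")
            }
        }

        // Broadcast a command to everyone else in the session.
        try? await messenger.send(PlaybackCommand(isPlaying: true, position: 0))
    }
}
```

For synchronized video specifically, the playback coordinator on AVPlayer handles the timing details for you; a messenger like the one sketched here is better suited to custom app state.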
What's the difference between SharePlay and Shared with You? Can they work together?
SharePlay allows people to share rich experiences with each other. Shared with You helps make app content that people are sharing in Messages available to your app. For example, if a group chat is discussing a funny meme video from your app, adopting Shared with You would allow your app to highlight that content in the app. And if your app supports SharePlay, you can surface that relevant content as an option for watching together.
Separately, Shared with You offers ways to initiate collaboration on shared, persisted content (such as documents) over Messages and FaceTime. You can choose to support SharePlay on that collaborative content, but if you do, consider the ephemerality of a SharePlay experience compared to the persistence of collaboration. For example, if your document is a presentation, you may wish to leverage Shared with You to get editors into the space while using SharePlay to launch an interactive presentation mode that just isn't possible with screen sharing alone.
What's the easiest way for people to share content?
When your app lets the system know that your current view has shareable content on screen, people who bring their devices together can seamlessly share that content, much like NameDrop, which presents a brief "shockwave" animation when they do. This method supports the discrete actions of sharing documents, initiating SharePlay, and starting a collaboration. It can also connect your content to the system share sheet and help you expose shareable content to the Share menu in visionOS.
Can someone on iPhone join a SharePlay session with someone on Apple Vision Pro?
Yes! SharePlay is supported across iOS, iPadOS, macOS, tvOS, and visionOS. That means people can watch a show together on Apple TV+ and keep their playback synchronized across all platforms. To support a similar playback situation in your app, watch Coordinate media playback in Safari with Group Activities. If you're looking to maintain your app's visual consistency across platforms, check out the Group Session Messenger and DrawTogether sample project. Remember: SharePlay keeps things synchronized, but your UI is up to you.
How do I get started adopting spatial Personas with SharePlay in visionOS?
When you add Group Activities to your app, people can share in that activity over FaceTime while appearing windowed, essentially the same SharePlay experience they'd see on other platforms. In visionOS, you have the ability to create a shared spatial experience using spatial Personas, in which participants are placed according to a template. For example:
Using spatial Personas, the environment is kept consistent and participants can see each other's facial expressions in real time.
How do I maintain visual and spatial consistency with all participants in visionOS?
FaceTime in visionOS provides a shared spatial context by placing spatial Personas in a consistent way around your app. This is what we refer to as "visual consistency." You can use SharePlay to maintain the same content in your app for all participants.
Can both a window and a volume be shared at the same time in a SharePlay session?
No. Only one window or volume can be associated with a SharePlay session, but you can help the system choose the proper window or volume.
How many people can participate in a group activity?
SharePlay supports 33 total participants, including yourself. Group activities on visionOS involving spatial Personas support five participants at a time.
Do iOS and iPadOS apps that are compatible with visionOS also support SharePlay in visionOS?
Yes. During a FaceTime call, your app will appear in a window, and participants in the FaceTime call will appear next to it.
Learn more about SharePlay:
- Design spatial SharePlay experiences
- Build spatial SharePlay experiences
- Share files with SharePlay
- Add SharePlay to your app
"I taught myself": Tucker MacDonald and the rise of Tide Guide
Lots of apps have great origin stories, but the tale of Tucker MacDonald and Tide Guide seems tailor-made for the Hollywood treatment. It begins in the dawn hours on Cape Cod, where a school-age MacDonald first learned to fish with his grandfather.
"Every day, he'd look in the paper for the tide tables," says MacDonald. "Then he'd call me up and say, 'Alright Tucker, we've got a good tide and good weather. Let's be at the dock by 5:30 a.m.'"
Rhapsody in blue: Tide Guide delivers Washington weather data in a gorgeous design and color scheme.
That was MacDonald's first introduction to tides, and the spark behind Tide Guide, which delivers comprehensive forecasts through top-notch data visualizations, an impressive array of widgets, an expanded iPad layout, and Live Activities that look especially great in, appropriately enough, the Dynamic Island. The SwiftUI-built app also offers beautiful Apple Watch complications and a UI that can be easily customized, depending on how deep you want to dive into its data. It's a remarkable blend of original design and framework standards, perfect for plotting optimal times for a boat launch, research project, or picnic on the beach.
Impressively, Tide Guide was named a 2023 Apple Design Award finalist, no mean feat for a solo developer who had zero previous app-building experience and started his career as a freelance filmmaker.
"I wanted to be a Hollywood director since I was in the fifth grade," says MacDonald. Early in his filmmaking career, MacDonald found himself in need of a tool that could help him pre-visualize different camera and lens combinations, "like a director's viewfinder app," he says. And while he caught a few decent options on the market, MacDonald wanted an app with iOS design language that felt more at home on his iPhone. "So I dove in, watched videos, and taught myself how to make it," he says.
My primary use cases were going fishing, heading to the beach, or trying to catch a sunset.
Tucker MacDonald, Tide Guide
Before too long, MacDonald drifted away from filmmaking and into development, taking a job as a UI designer for a social app. "The app ended up failing, but the job taught me how a designer works with an engineer," he says. "I also learned a lot about design best practices, because I had been creating apps that used crazy elements, non-standard navigation, stuff like that."
Tucker MacDonald grew up fishing with his grandfather in the waters off Cape Cod.
Armed with growing design knowledge, he started thinking about those mornings with his grandfather, and how he might create something that could speed up the crucial process of finding optimal fishing conditions. And it didn't need to be rocket science. "My primary use cases were going fishing, heading to the beach, or trying to catch a sunset," he says. "I just needed to show current conditions."
I'd say my designs were way prettier than the code I wrote.
Tucker MacDonald, Tide Guide
In the following years, Tide Guide grew in parallel with MacDonald's self-taught skill set. "There was a lot of trial and error, and I'd say my designs were way prettier than the code I wrote," he laughs. "But I learned both coding and design by reading documentation and asking questions in the developer community."
Today's Tide Guide is quite the upgrade from that initial version. MacDonald continues to target anyone heading to the ocean but includes powerful metrics, like an hour-by-hour 10-day forecast, water temperatures, and swell height, that advanced users can seek out as needed. The app's palette is even designed to match the color of the sky throughout the day. "The more time you spend with it, the more you can dig into different layers," he says.
All the information you need for a day on the water, in one place.
People around the world have dug into those layers, including an Alaskan tour company operator who can only land in a remote area when the tide is right, and a nonprofit national rescue service in Scotland, whose members weighed in with a Siri shortcut-related workflow request that MacDonald promptly included. And as Tide Guide gets bigger, MacDonald's knowledge of developing (and oceanography) continues to swell. "I'm just happy that my passion for crafting an incredible experience comes through," he says, "because I really do have so much fun making it."
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
What's new for apps distributed in the European Union
Core Technology Fee (CTF)
The CTF is an element of the alternative business terms in the EU that reflects the value Apple provides developers through tools, technologies, and services that enable them to build and share innovative apps. We believe anyone with a good idea and the ingenuity to bring it to life should have the opportunity to offer their app to the world. Only developers who reach significant scale (more than one million first annual installs per year in the EU) pay the CTF. Nonprofit organizations, government entities, and educational institutions approved for a fee waiver don't pay the CTF. Today, we're introducing two additional conditions in which the CTF is not required:
- First, no CTF is required if a developer has no revenue whatsoever. This includes creating a free app without monetization that is not related to revenue of any kind (physical, digital, advertising, or otherwise). This condition is intended to give students, hobbyists, and other non-commercial developers an opportunity to create a popular app without paying the CTF.
- Second, small developers (less than €10 million in global annual business revenue*) that adopt the alternative business terms receive a 3-year free on-ramp to the CTF to help them create innovative apps and rapidly grow their business. Within this 3-year period, if a small developer that hasn't previously exceeded one million first annual installs crosses the threshold for the first time, they won't pay the CTF, even if they continue to exceed one million first annual installs during that time. If a small developer grows to earn global revenue between €10 million and €50 million within the 3-year on-ramp period, they'll start to pay the CTF after one million first annual installs, up to a cap of €1 million per year.
This week, the European Commission designated iPadOS a gatekeeper platform under the Digital Markets Act. Apple will bring our recent iOS changes for apps in the European Union (EU) to iPadOS later this fall, as required. Developers can choose to adopt the Alternative Terms Addendum for Apps in the EU that will include these additional capabilities and options on iPadOS, or stay on Apple's existing terms.
Once these changes are publicly available to users in the EU, the CTF will also apply to iPadOS apps downloaded through the App Store, Web Distribution, and/or alternative marketplaces. Users who install the same app on both iOS and iPadOS within a 12-month period will only generate one first annual install for that app. To help developers estimate any potential impact on their app businesses under the Alternative Terms Addendum for Apps in the EU, we've updated the App Install reports in App Store Connect that can be used with our fee calculator.
For more details, visit Understanding the Core Technology Fee for iOS apps in the European Union. If you've already entered into the Alternative Terms Addendum for Apps in the EU, be sure to sign the updated terms.
* Global business revenue takes into account revenue across all commercial activity, including from associated corporate entities. For additional details, read the Alternative Terms Addendum for Apps in the EU.
Reminder: Privacy requirement for app submissions starts May 1
The App Store was created to be a safe place for users to discover and get millions of apps all around the world. Over the years, we've built many critical privacy and security features that help protect users and give them transparency and control, from Privacy Nutrition Labels to app tracking transparency, and so many more.
An essential requirement of maintaining user trust is that developers are responsible for all of the code in their apps, including code frameworks and libraries from other sources. That's why we've created privacy manifests and signature requirements for the most popular third-party SDKs, as well as required reasons for covered APIs.
Starting May 1, 2024, new or updated apps that have a newly added third-party SDK that's on the list of commonly used third-party SDKs will need all of the following to be submitted in App Store Connect:
- Required reasons for each listed API
- Privacy manifests
- Valid signatures when the SDK is added as a binary dependency
Apps won't be accepted if they fail to meet the manifest and signature requirements. Apps also won't be accepted if all of the following apply:
- They're missing a reason for a listed API
- The code is part of a dynamic framework embedded via the Embed Frameworks build phase
- The framework is a newly added third-party SDK that's on the list of commonly used third-party SDKs
In the future, these required reason requirements will expand to include the entire app binary. If you're not using an API for an approved reason, please find an alternative. These changes are designed to help you better understand how third-party SDKs use data, secure software dependencies, and provide additional privacy protection for users.
This is a step forward for all apps and we encourage all SDKs to adopt this functionality to better support the apps that depend on them.
Q&A: Promoting your app or game with Apple Search Ads
Apple Search Ads helps you drive discovery of your app or game on the App Store. We caught up with the Apple Search Ads team to learn more about successfully using the service, including signing up for the free online Apple Search Ads Certification course.
How might my app or game benefit from promotion on the App Store?
With Apple Search Ads, developers are seeing an increase in downloads, retention, return on ad spend, and more. Find out how the developers behind The Chefz, Tiket, and Petit BamBou have put the service into practice.
Where will my ad appear?
You can reach people in the following places:
- The Today tab, where people start their App Store visit.
- The Search tab, before people search for something specific.
- Search results, at the top of the results list.
- Product pages, in the "You Might Also Like" section.
Online Apple Search Ads Certification training teaches proven best practices for driving stronger campaign performance. Certification training is designed for all skill levels, from marketing pros to those just starting out. To become certified, complete all of the Certification lessons (each takes between 10 and 20 minutes), then test your skills with a free exam. Once you're certified, you can share your certificate with your professional network on platforms like LinkedIn.
Sign up here with your Apple ID.
Will my certification expire?
Although your Apple Search Ads certification never expires, training is regularly updated. You can choose to be notified about these updates through email or web push notifications.
Can I highlight specific content or features in my ads?
You can use the custom product pages you create in App Store Connect to tailor your ads for a specific audience, feature launch, seasonal promotion, and more. For instance, you can create an ad for the Today tab that leads people to a specific custom product page, or create ad variations for different search queries. Certification includes a lesson on how to do so.
Can I advertise my app before launch?
You can use Apple Search Ads to create ads for apps you've made available for pre-order. People can order your app before it's released, and it'll automatically download onto their devices on release day.
Apple Search Ads now available in Brazil and more Latin American markets
Drive discovery and downloads on the App Store with Apple Search Ads in 70 countries and regions, now including Brazil, Bolivia, Costa Rica, the Dominican Republic, El Salvador, Guatemala, Honduras, Panama, and Paraguay.
Visit the Apple Search Ads site and Q&A.
And explore best practices to improve your campaign performance with the free Apple Search Ads Certification course.
Let loose.
Watch the May 7 event at apple.com, on Apple TV, or on YouTube Live.
Check out our newest developer activities
Join us around the world to learn about growing your business, elevating your app design, and preparing for the App Review process. Here's a sample of our new activities, and you can always browse the full schedule to find more.
- Expand your app to new markets: Learn how to bring your apps and games to Southeast Asia, Hong Kong, and Taiwan in new online sessions with App Store experts.
- Request a one-on-one App Review consultation: Meet online to discuss the App Review Guidelines and explore best practices for a smooth review process.
- Visit the Apple Vision Pro developer labs: Test, refine, and optimize your apps and games for the infinite canvas, with in-person help from Apple.
- Request a design or technology consultation: In this 30-minute online consultation, you'll get expert advice tailored to your app or game.
Web Distribution now available in iOS 17.5 beta 2 and App Store Connect
Web Distribution lets authorized developers distribute their iOS apps to users in the European Union (EU) directly from a website owned by the developer. Apple will provide developers access to APIs that facilitate the distribution of their apps from the web, integrate with system functionality, and back up and restore users' apps, once they meet certain requirements designed to help protect users and platform integrity. For details, visit Getting started with Web Distribution in the EU.
Get ready with the latest beta releases
The beta versions of iOS 17.5, iPadOS 17.5, macOS 14.5, tvOS 17.5, visionOS 1.2, and watchOS 10.5 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.3.
Updated App Review Guidelines now available
The App Review Guidelines have been revised to support updated policies, upcoming features, and to provide clarification. The following guidelines have been updated:
- 3.1.1(a): Updated to include Music Streaming Services Entitlements.
- 4.7: Added games from retro game console emulator apps to the list of permitted software, and clarified that mini apps and mini games must be HTML5.
Hello Developer: April 2024
Welcome to Hello Developer, and the kickoff to WWDC season. In this edition:
- Discover what's ahead at WWDC24, and check out the new Apple Developer YouTube channel.
- Learn how the all-new Develop in Swift Tutorials can help jump-start a career in app development.
- Find out how Zach Gage and Jack Schlesinger rebooted the crossword puzzle with Knotwords.
WWDC24
The countdown is on
WWDC season is officially here.
This year's Worldwide Developers Conference takes place online from June 10 through 14, offering you the chance to explore the new tools, frameworks, and technologies that'll help you create your best apps and games yet.
All week long, you can learn and refine new skills through video sessions, meet with Apple experts to advance your projects and ideas, and join the developer community for fun activities. It's an innovative week of technology and creativity, all online at no cost.
And for the first time, WWDC video sessions will be available on YouTube, in addition to the Apple Developer app and website. Visit the new Apple Developer channel to subscribe and catch up on select sessions.
TUTORIALS
Check out the new Develop in Swift Tutorials
Know a student or aspiring developer looking to start their coding journey? Visit the all-new Develop in Swift Tutorials, designed to introduce Swift, SwiftUI, and spatial computing through the experience of building a project in Xcode.
BEHIND THE DESIGN
Gage and Schlesinger at the crossroads
Learn how acclaimed game designers Zach Gage and Jack Schlesinger reimagined the crossword with Knotwords.
Knotwords: Gage and Schlesinger at the crossroads
MEET WITH APPLE EXPERTS
Browse new developer activities
Check out this month's sessions, labs, and consultations, held online and in person around the world.
NEWS AND DOCUMENTATION
Explore and create with new and updated docs
- Check out two new sample code projects about creating and viewing stereo MV-HEVC movies: Converting side-by-side 3D video to multiview HEVC and Reading multiview 3D video files.
- Find out about creating distribution-signed code for macOS, and explore the details of packaging Mac software for distribution.
- Learn what's new in the Human Interface Guidelines, including guidance on displaying virtual hands, organizing your spatial layouts, and using Activity rings in your app.
View the complete list of new resources.
Subscribe to Hello Developer
Want to get Hello Developer in your inbox? Make sure you've opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We'd love to hear from you. If you have suggestions for our activities or stories, please let us know.
Knotwords: Gage and Schlesinger at the crossroads
Knotwords is a clever twist on crossword puzzles, so much so that one would expect creators Zach Gage and Jack Schlesinger to be longtime crossword masters who set out to build themselves a new challenge.
One would be totally wrong.
"Crosswords never hit with me," says Gage, with a laugh. "I dragged myself kicking and screaming into this one."
It's not about "What random box of words will you get?" but, "What are the decisions you'll make as a player?"
Jack Schlesinger, Knotwords
In fact, Gage and Schlesinger created the Apple Design Award finalist Knotwords, and the Apple Arcade version, Knotwords+, not to revolutionize the humble crossword but to learn it. "We know people like crosswords," says Schlesinger, "so we wanted to figure out what we were missing." And the process didn't just result in a new game; it led them straight to the secret of word-game design success. "It's not about 'What random box of words will you get?'" says Schlesinger, "but, 'What are the decisions you'll make as a player?'"
Knotwords challenges players to complete a puzzle using only specific letters in specific parts of the board.
Gage and Schlesinger are longtime design partners; in addition to designing Knotwords and Good Sudoku with Gage, Schlesinger contributed to the 2020 reboot of SpellTower and the Apple Arcade title Card of Darkness. Neither came to game design through traditional avenues: Gage has a background in interactive art, while Schlesinger is the coding mastermind with a history in theater and, of all things, rock operas. (He's responsible for the note-perfect soundtracks for many of the duo's games.) And they're as likely to talk about the philosophy behind a game as the development of it.
I had been under the mistaken impression that the magic of a simple game was in its simple rule set. The magic actually comes from having an amazing algorithmic puzzle constructor.
Zach Gage
"When you're playing a crossword, you're fully focused on the clues. You're not focused on the grid at all," explains Gage. "But when you're building a crossword, you're always thinking about the grid. I wondered if there was a way to ask players not to solve a crossword but recreate the grid instead," he says.
Knotwords lets players use only specific letters in specific sections of the grid, a good idea, but one that initially proved elusive to refine and difficult to scale. "At first, the idea really wasn't coming together," says Gage, "so we took a break and built Good Sudoku." Building their take on sudoku, another game with simple rules and extraordinary complexity, proved critical to restarting Knotwords. "I had been under the mistaken impression that the magic of a simple game was in its simple rule set," Gage says. "The magic actually comes from having an amazing algorithmic puzzle constructor."
An early (and very analog) prototype of Knotwords.
Problematically, they didn't have one of those just lying around. But they did have Schlesinger. "I said, 'I will make you a generator for Knotwords in two hours,'" Schlesinger laughs. That was maybe a little ambitious. The first version took eight hours and was, by his own account, not great. However, it proved a valuable learning experience. "We learned that we needed to model a player. What would someone do here? What steps could they take? If they make a mistake, how long would it take them to correct it?" In short, the puzzle generation algorithm needed to take into account not just rules, but also player behavior.
The work provided the duo an answer for why people liked crosswords. It also did one better by addressing one of Gage's longstanding game-design philosophies. "To me, the only thing that's fun in a game is the process of getting better," says Gage. "In every game I've made, the most important questions have been: What's the journey that people are going through and how can we make that journey fun? And it turns out it's easy to discover that if I've never played a game before."
Find Knotwords+ on Apple Arcade
Behind the Design is a series that explores design practices and philosophies from each of the winners and finalists of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
WWDC24: June 10-14
Join the worldwide developer community online for a week of technology and creativity.
Be there for the unveiling of the latest Apple platforms, technologies, and tools. Learn how to create and elevate your apps and games. Engage with Apple designers and engineers and connect with the worldwide developer community. All online and at no cost.
Provide your trader status in App Store Connect
To align with the Digital Services Act (DSA) in the European Union (EU), Account Holders and Admins in the Apple Developer Program can now enter their trader status in App Store Connect.
Submission requirements
You'll need to let us know whether or not you're a trader to submit new apps to the App Store. If you're a trader, you may be asked for documentation that verifies your trader contact information.
More options for apps distributed in the European Union
We're providing more flexibility for developers who distribute apps in the European Union (EU), including introducing a new way to distribute apps directly from a developer's website.
More flexibility
Developers who've agreed to the Alternative Terms Addendum for Apps in the EU have new options for their apps in the EU:
- Alternative app marketplaces. Marketplaces can choose to offer a catalog of apps solely from the developer of the marketplace.
- Linking out to purchase. When directing users to complete a transaction for digital goods or services on an external webpage, developers can choose how to design promotions, discounts, and other deals. The Apple-provided design templates, which are optimized for key purchase and promotional use cases, are now optional.
Web Distribution, available with a software update later this spring, will let authorized developers distribute their iOS apps to EU users directly from a website owned by the developer. Apple will provide authorized developers access to APIs that facilitate the distribution of their apps from the web, integrate with system functionality, back up and restore users' apps, and more. For details, visit Getting ready for Web Distribution in the EU.
Uncovering the hidden joys of Finding Hannah
On its surface, Finding Hannah is a bright and playful hidden-object game, but dig a little deeper and you'll find something much more.
The Hannah of Finding Hannah is a 38-year-old Berlin resident trying to navigate career, relationships (including with her best friend/ex, Emma), and the nagging feeling that something's missing in her life. To help find answers, Hannah turns to her nurturing grandmother and free-spirited mother, whose own stories gradually come into focus and shape the game's message as well.
"It's really a story about three women from three generations looking for happiness," says Franziska Zeiner, cofounder and co-CEO of the Fein Games studio. "For each one, times are changing. But the question is: Are they getting better?"
Locate hidden objects in this lively Berlin subway scene to move along the story of Finding Hannah.
To move the story along, players comb through a series of richly drawn scenes: a packed club, a bustling train, a pleasantly cluttered bookstore. Locating (and merging) hidden items unlocks new chapters, and the more you find, the more the time-hopping story unfolds. The remarkable mix of message and mechanic made the game a 2023 Apple Design Award finalist, as well as a Cultural Impact winner in the 2023 App Store Awards.
Fein Games is the brainchild of Zeiner and Lea Schönfelder, longtime friends from the same small town in Germany who both pursued careers in game design, despite not being all that into video games growing up. "I mean, at some point I played The Sims as a teenager," laughs Zeiner, "but games were rare for us. When I eventually went to study game design, I felt like I didn't really fit in, because my game literacy was pretty limited."
The goal is to create for people who enjoy authentic female experiences in games.
Lea Schönfelder, cofounder and co-CEO of Fein Games
Cofounder and co-CEO Schönfelder also says she felt like an outsider, but soon found game design a surprisingly organic match for her background in illustration and animation. "In my early years, I saw a lot of people doing unconventional things with games and thought, 'Wow, this is really powerful.' And I knew I loved telling stories, maybe not in a linear form but a more systematic way." Those early years included time with studios like Nerial and ustwo Games, where she worked on Monument Valley 2 and Assemble With Care.
Drawing on their years of experience, and maybe that shared unconventional background, the pair went out on their own to launch Fein Games in 2020. From day one, the studio was driven by more than financial success. "The goal is to create for people who enjoy authentic female experiences in games," says Schönfelder. "But the product is only one side of the coin; there's also the process of how you create, and we've been able to make inclusive games that maybe bring different perspectives to the world."
Hannah and her free-spirited mother, Sigrid, share an uncomfortable conversation.
Finding Hannah was driven by those perspectives from day one. The story was always meant to be a time-hopping journey featuring women in Berlin, and though it isn't autobiographical, bits and pieces do draw from their creators' lives. "There's a scene inspired by my grandmother, who was a nurse during the second world war and would tan with her friends on a hospital roof while the planes circled above," says Schönfelder. The script was written by Berlin-based author Rebecca Harwick, who also served as lead writer on June's Journey and writer on Switchcraft, The Elder Scrolls Online, and many others.
In the beginning, I felt like I wasn't part of the group, and maybe even a little ashamed that I wasn't as games-literate as my colleagues. But what I thought was a weakness was actually a strength.
Lea Schönfelder, cofounder and co-CEO of Fein Games
To design the art for the different eras, the team tried not to think like gamers. "The idea was to try to reach people who weren't gamers yet, and we thought we'd most likely be able to do that if we found a style that hadn't been seen in games before," says Zeiner. To get there, they hired Elena Resko, a Russian-born artist based in Berlin who'd also never worked in games. "What you see is her style," says Schönfelder. "She didn't develop that for the game. I think that's why it has such a deep level of polish, because Elena has been developing her style for probably a decade now."
And the hidden-object and merge gameplay mechanic itself is an example of sticking with a proven success. "When creating games, you usually want to invent a new mechanic, right?" says Schönfelder. "But Finding Hannah is for a more casual audience. And it's been proven that the hidden-object mechanic works. So we eventually said, 'Well, maybe we don't need to reinvent the wheel here,'" she laughs.
The scene in which Hannah's grandmother sits with friends on the roof was inspired by Lea Schönfelder's grandmother.
The result is a hidden-object game like none other: part puzzler, part historically flavored narrative, part meditation on the choices faced by women across generations. And it couldn't have come from a team with any other background. "In the beginning, I felt like I wasn't part of the group, and maybe even a little ashamed that I wasn't as games-literate as my colleagues," says Schönfelder. "But what I thought was a weakness was actually a strength. Players don't always play your game like you intended. And I felt a very strong, very sympathetic connection to people, and wanted to make the experience as smooth and accessible as possible. And I think that shows."
Learn more about Finding Hannah
Download Finding Hannah from the App Store
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Q&A with the Mac notary service team
Security is at the core of every Apple platform. The Mac notary service team is part of Apple Security Engineering and Architecture, and in this Q&A, they share their tips on app distribution and account security to help Mac developers have a positive experience and protect their users.
When should I submit my new app for notarization?
Apps should be mostly complete at the time of notarization. There's no need to notarize an app that isn't functional yet.
How often should I submit my app for notarization?
You should submit all versions you might want to distribute, including beta versions. That's because we build a profile of your unique software to help distinguish your apps from other developers' apps, as well as malware. As we release new signatures to block malware, this profile helps ensure that the software you've notarized is unaffected.
What happens if my app is selected for additional analysis?
Some uploads to the notary service require additional evaluation. If your app falls into this category, rest assured that we've received your file and will complete the analysis, though it may take longer than usual. In addition, if you've made changes to your app while a prior upload has been delayed, it's fine to upload a new build.
What should I do if my app is rejected?
Keep in mind that empty apps or apps that might damage someone's computer (by changing important system settings without the owner's knowledge, for instance) may be rejected, even if they're not malicious. If your app is rejected, first confirm that your app doesn't contain malware. Then determine whether it should be distributed privately instead, such as within your enterprise via MDM.
What should I do if my business changes?
Keep your developer account details, including your business name, contact info, address, and agreements, up to date. Drastic shifts in account activity or software you notarize can be signs that your account or certificate has been compromised. If we notice this type of activity, we may suspend your account while we investigate further.
I'm a contractor. What are some ways to make sure I'm developing responsibly?
Be cautious if anyone asks you to:
- Sign, notarize, or distribute binaries that you didnât develop.
- Develop software that appears to be a clone of existing software.
- Develop what looks like an internal enterprise application when your customer isn't an employee of that company.
- Develop software in a high-risk category, like VPNs, system utilities, finance, or surveillance apps. These categories of software have privileged access to private data, increasing the risk to users.
Remember: It's your responsibility to know your customer and the functionality of all software you build and/or sign.
What can I do to maintain control of my developer account?
Since malware developers may try to gain access to legitimate accounts to hide their activity, be sure you have two-factor authentication enabled. Bad actors may also pose as consultants or employees and ask you to add them to your developer team. Luckily, there's an easy solution: Don't share access to your accounts.
Should I remove access for developers who are no longer on my team?Yes. And we can revoke Developer ID certificates for you if you suspect they may have been compromised.
Learn more about notarization
Notarizing macOS software before distribution
Hello Developer: March 2024
Welcome to Hello Developer. In this edition:
- Find out what you can do at the Apple Developer Centers in Bengaluru, Cupertino, Shanghai, and Singapore.
- Learn how the team behind Finding Hannah created a hidden-object game with a meaningful message.
- Get security tips from the Mac notary service team.
- Catch up on the latest news and documentation.
FEATURED
Step inside the Apple Developer Centers
The new Apple Developer Centers are open around the world - and we can't wait for you to come by. With locations in Bengaluru, Cupertino, Shanghai, and now Singapore, Apple Developer Centers are the home bases for in-person sessions, labs, workshops, and consultations around the world.
Whether you're looking to enhance your existing app or game, refine your design, or launch a new project, there's something exciting for you at the Apple Developer Centers. Browse activities in Bengaluru, Cupertino, Shanghai, and Singapore.
BEHIND THE DESIGN
Uncover the hidden joys of Finding Hannah
On its surface, Finding Hannah is a bright and playful hidden-object game - but dig a little deeper and you'll find something more. "It's really a story about three women from three generations looking for happiness," says Franziska Zeiner, cofounder and co-CEO of the Fein Games studio. "For each one, times are changing. But the question is: Are they getting better?" Find out how Zeiner and her Berlin-based team created this compelling Apple Design Award finalist.
Uncovering the hidden joys of Finding Hannah View now
Q&A
Get answers from the Mac notary service team
Security is at the core of every Apple platform. The Mac notary service team is part of Apple Security Engineering and Architecture, and in this Q&A, they share their tips on app distribution and account security to help Mac developers have a positive experience - and protect their users.
Q&A with the Mac notary service team View now
VIDEOS
Improve your subscriber retention with App Store features
In this new video, App Store experts share their tips for minimizing churn and winning back subscribers.
Improve your subscriber retention with App Store features Watch now
GROW YOUR BUSINESS
Make the most of custom product pages
Learn how you can highlight different app capabilities and content through additional (and fully localizable) versions of your product page. With custom product pages, you can create up to 35 additional versions - and view their performance data in App Store Connect.
Plus, thanks to seamless integration with Apple Search Ads, you can use custom product pages to easily create tailored ad variations on the App Store. Read how apps like HelloFresh, Pillow, and Facetune used the feature to gain performance improvements, like higher tap-through and conversion rates.
DOCUMENTATION
Find the details you need in new and updated docs
- Create complex materials and effects for 3D content with Shader Graph, a node-based material editor in Reality Composer Pro.
- Use SwiftData to add persistence to your app with minimal code and no external dependencies. Check out new documentation on classes, macros, and structures (a small sketch follows this list).
- Learn how to share configurations across Xcode Cloud workflows.
- Explore HIG updates about visionOS support, including new details on immersive experiences, the virtual keyboard, layout, color, and motion.
- New in Technotes: Learn how to identify and handle CloudKit throttles. Plus, find out how to recognize and resolve synchronization issues when working with NSPersistentCloudKitContainer, and how to explore details inside the container by capturing and analyzing a sysdiagnose.
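To give a sense of how little code SwiftData needs, here's a minimal sketch of a persisted model and the SwiftUI wiring that stores it. The Trip type, its properties, and the views are hypothetical placeholders rather than anything from the documentation above.

import SwiftData
import SwiftUI

// A hypothetical model type; the @Model macro generates the persistence machinery.
@Model
final class Trip {
    var name: String
    var startDate: Date

    init(name: String, startDate: Date) {
        self.name = name
        self.startDate = startDate
    }
}

@main
struct TripsApp: App {
    var body: some Scene {
        WindowGroup {
            TripListView()
        }
        // Creates (or opens) a local store for Trip with no external dependencies.
        .modelContainer(for: Trip.self)
    }
}

struct TripListView: View {
    @Environment(\.modelContext) private var context
    @Query(sort: \Trip.startDate) private var trips: [Trip]

    var body: some View {
        List(trips) { trip in
            Text(trip.name)
        }
        .toolbar {
            Button("Add") {
                // Inserted objects are persisted automatically by the model context.
                context.insert(Trip(name: "New trip", startDate: .now))
            }
        }
    }
}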
View the full list of new resources
NEWS
Catch up on the latest updates
- App Store Connect upload requirement: Starting April 29, 2024, uploaded apps must be built with Xcode 15 for iOS 17, iPadOS 17, tvOS 17, or watchOS 10.
- Updates to support app distribution changes in the European Union: Learn how we're continuing to provide new ways to understand and utilize these changes.
- App Store Connect update: Learn about changes to app statuses and support for features related to alternative app distribution in the EU.
- App Store Connect API 3.3: Manage distribution keys, alternative distribution packages, and marketplace search for alternative app distribution in the EU.
Want to get Hello Developer in your inbox? Make sure you've opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We'd love to hear from you. If you have suggestions for our activities or stories, please let us know.
New App Store and iOS data analytics now available
We're expanding the analytics available for your apps to help you get even more insight into your business and apps' performance.
Over 50 new reports are now available through the App Store Connect API to help you analyze your apps' App Store and iOS performance. These reports include hundreds of new metrics that can enable you to evaluate your performance and find opportunities for improvement. Reports are organized into the following categories:
- App Store Engagement - the number of users on the App Store interacting with a developer's app or sharing it with others
- App Store Commerce - downloads, sales, pre-orders, and transactions made with the secure App Store In-App Purchase system
- App Usage - active devices, installs, app deletions, and more
- Frameworks Usage - an app's interaction with OS capabilities, such as PhotoPicker and Widgets
- Performance - how your apps perform and how users interact with specific features
New reports are also available through the CloudKit console with data about Apple Push Notifications and File Provider:
- Apple Push Notifications - notification states as they pass through the Apple Push Notification service (APNs)
- File Provider - usage, consistency, and error data
Updates to app distribution in the European Union
Over the past several weeks, we've communicated with thousands of developers to discuss DMA-related changes to iOS, Safari, and the App Store impacting apps in the European Union. As a result of the valuable feedback received, we've revised the Alternative Terms Addendum for Apps in the EU to update the following policies and provide developers more flexibility:
- Decisioning by membership: To make it easier for more developers to sign up for the new terms, we've removed the corporate entity requirement that the Addendum must be signed by each membership that controls, is controlled by, or is under common control with another membership. This means an entity can now choose to sign up for the new terms at the developer account level.
- Switching back: To help reduce the risk of unexpected business changes under the new terms, such as reaching massive scale more quickly than anticipated, or if you simply change your mind, we've created a one-time option to terminate the Addendum under certain circumstances and switch back to Apple's standard business terms for your EU apps. For details, view the Addendum.
- Alternative app marketplace requirements: To make it easier for developers who want to create alternative app marketplaces, we've added a new eligibility criterion that lets developers qualify without a stand-by letter of credit. For details, view the marketplace support page.
If you've already entered into the Addendum, you can sign the updated version here.
The latest OS Release Candidates are now available
You can now submit your apps and games built with Xcode 15.3 and all the latest SDKs for iOS 17.4, iPadOS 17.4, macOS 14.4, tvOS 17.4, visionOS 1.1, and watchOS 10.4.
Developers who have agreed to the Alternative Terms Addendum for Apps in the EU can now submit apps offering alternative payment options in the EU. They can also now measure the number of first annual installs their apps have accumulated.
If you'd like to discuss changes to iOS, Safari, and the App Store impacting apps in the EU to comply with the Digital Markets Act, request a 30-minute online consultation with an Apple team member.
Updated App Review Guidelines now available
The App Store Review Guidelines have been revised to support updated policies, upcoming features, and to provide clarification.
- The title of the document has been changed to App Review Guidelines.
- The Introduction section explains that in the European Union, developers can also distribute notarized iOS apps from alternative app marketplaces. This section provides links to further information about alternative app marketplaces and Notarization for iOS apps.
The following guidelines have been updated:
- 2.3.1: Added that a violation of this rule is grounds for an app being blocked from installing via alternative distribution.
- 2.3.10: Added that developers cannot include names, icons, or imagery of other mobile platforms or alternative app marketplaces in their apps or metadata, unless there is specific, approved interactive functionality.
- 3.1.3(b): Added a link to 3.1.1 to make clear that 3.1.1(a) applies, and multiplatform services apps can use the 3.1.1(a) entitlement.
- 4.8 Login Services: Updated to make clear that the login service cannot collect interactions with your app for advertising purposes without consent. It also adds that another login service is not required if your app is an alternative app marketplace, or an app distributed from an alternative app marketplace, that uses a marketplace-specific login for account, download, and commerce features.
- 5.1.1(viii): Added that apps that compile personal information from any source that is not directly from the user or without the user's explicit consent, even public databases, are not permitted on alternative app marketplaces.
- 5.4 and 5.5: Updated to state that apps that do not comply with these guidelines will be blocked from installing via alternative distribution.
- Bug Fix Submissions: Added that bug fixes will not be delayed for apps that are already on alternative app marketplaces, except for those related to legal or safety issues.
View the App Review Guidelines
Translations of the guidelines will be available on the Apple Developer website within one month.
Privacy updates for App Store submissions
Developers are responsible for all code included in their apps. At WWDC23, we introduced new privacy manifests and signatures for commonly used third-party SDKs and announced that developers will need to declare approved reasons for using a set of APIs in their app's privacy manifest. These changes help developers better understand how third-party SDKs use data, secure software dependencies, and provide additional privacy protection for users.
Starting March 13: If you upload a new or updated app to App Store Connect that uses an API requiring approved reasons, we'll send you an email letting you know if you're missing reasons in your app's privacy manifest. This is in addition to the existing notification in App Store Connect.
Starting May 1: You'll need to include approved reasons for the listed APIs used by your app's code to upload a new or updated app to App Store Connect. If you're not using an API for an allowed reason, please find an alternative. And if you add a new third-party SDK that's on the list of commonly used third-party SDKs, these API, privacy manifest, and signature requirements will apply to that SDK. Make sure to use a version of the SDK that includes its privacy manifest and note that signatures are also required when the SDK is added as a binary dependency.
This functionality is a step forward for all apps and we encourage all SDKs to adopt it to better support the apps that depend on them.
App submissions now open for the latest OS releases
Submit in App Store Connect
iOS 17.4, iPadOS 17.4, macOS 14.4, tvOS 17.4, visionOS 1.1, and watchOS 10.4 will soon be available to customers worldwide. Build your apps and games using the Xcode 15.3 Release Candidate and latest SDKs, then test them using TestFlight. You can submit your iPhone and iPad apps today.
Apps in the European Union
Developers who've agreed to the Alternative Terms Addendum for Apps in the EU can set up marketplace distribution in the EU. Eligible developers can also submit marketplace apps and offer apps with alternative browser engines.
Once these platform versions are publicly available:
- First annual installs for the Core Technology Fee begin accruing and the new commission rates take effect for these developers.
- Apps offering alternative payment options in the EU will be accepted in App Store Connect. In the meantime, you can test in the sandbox environment.
If you'd like to discuss changes to iOS, Safari, and the App Store impacting apps in the EU to comply with the Digital Markets Act, request a 30-minute online consultation to meet with an Apple team member. In addition, if you're interested in getting started with operating an alternative app marketplace on iOS in the EU, you can request to attend an in-person lab in Cork, Ireland.
Developer activities you'll love
Apple developer activities are in full swing. Here's a look at what's happening:
- Join an online session hosted by App Store experts to learn to minimize churn and win back subscribers.
- Celebrate International Women's Day with special in-person activities in Bengaluru, Cupertino, Shanghai, Singapore, Sydney, and Tokyo.
- Visit an Apple Vision Pro developer lab in Cupertino, London, Munich, Singapore, Sydney, or Tokyo to test and refine your apps for the infinite canvas.
- Meet with an Apple team member to discuss changes to iOS, Safari, and the App Store impacting apps in the European Union to comply with the Digital Markets Act.
And we'll have lots more activities in store - online, in person, and in multiple languages - all year long.
Q&A with the Apple UX writing team
Writing is fundamental - especially in your apps and games, where the right words can have a profound impact on your experience. During WWDC23, the Apple UX writing team hosted a wide-ranging Q&A that covered everything from technical concepts to inspiring content to whether apps should have "character." Here are some highlights from that conversation and resources to help you further explore writing for user interfaces.
Writing for interfaces Watch now
My app has a lot of text. What's the best way to make copy easier to read?
Ask yourself: What am I trying to accomplish with my writing? Once you've answered that, you can start addressing the writing itself. First, break up your paragraphs into individual sentences. Then, go back and make each sentence as short and punchy as possible. To go even further, you can start each sentence the same way - like with a verb - or add section headers to break up the copy. Or, to put it another way:
Break up your paragraphs into individual sentences.
Make each sentence as short and punchy as possible.
Start each sentence the same way - like with a verb.
Keep other options in mind too. Sometimes it might be better to get your point across with a video or animation. You might also put a short answer first and expand on it elsewhere. That way, you're helping people who are new to your app while offering a richer option for those who want to dive a little deeper.
What's your advice for explaining technical concepts in simple terms?
First, remember that not everyone will have your level of understanding. Sometimes we get so excited about technical details that we forget the folks who might be using an app for the first time.
Try explaining the concept to a friend or colleague first - or ask an engineer to give you a quick summary of a feature.
From there, break down your idea into smaller components and delete anything that isn't absolutely necessary. Technical concepts can feel even more intimidating when delivered in a big block of text. Can you link to a support page? Do people need that information in this particular moment? Offering small bits of information is always a good first step.
How can I harness the "less is more" concept without leaving people confused?
Clarity should always be the priority. The trick is to make something as long as it needs to be, but as short as it can be. Start by writing everything down - and then putting it away for a few days. When you come back to it, you'll have a clearer perspective on what can be cut.
One more tip: Look for clusters of short words - those usually offer opportunities to tighten things up.
How should I think about writing my onboarding?
Naturally, this will depend on your app or game - you'll have to figure out what's necessary and right for you. But typically, brevity is key when it comes to text - especially at the beginning, when people are just trying to get into the experience.
Consider providing a brief overview of high-level features so people know why they should use your app and what to expect while doing so. Also, think about how they got there. What text did they see before opening your app? What text appeared on the App Store? All of this contributes to the overall journey.
Human Interface Guidelines: Onboarding
Should UX writing have a personal tone? Or does that make localization too difficult?
When establishing your voice and tone, you should absolutely consider adding elements of personality to get the elusive element of "character." But you're right to consider how your strings will localize. Ideally, you'll work with your localization partners for this. Focus on phrases that strike the tone you want without resorting to idioms. And remember that a little goes a long way.
How should I approach writing inclusively, particularly in conveying gender?
This is an incredibly important part of designing for everyone. Consider whether specifying gender is necessary for the experience you're creating. If gender is necessary, it's helpful to provide a full set of options - as well as an option to decline the question. Many things can be written without alluding to gender at all and are thus more inclusive. You can also consider using glyphs. SF Symbols provides lots of inclusive options. And you can find more guidance about writing inclusively in the Human Interface Guidelines.
Human Interface Guidelines: Inclusion
What are some best practices for writing helpful notifications?
First, keep in mind that notifications can feel inherently interruptive - and that people receive lots of them all day long. Before you write a notification at all, ask yourself these questions:
- Does the message need to be sent right now?
- Does the message save someone from opening your app?
- Does the message convey something you haven't already explained?
If you answered yes to all of the above, learn more about notification best practices in the Human Interface Guidelines.
Human Interface Guidelines: Notifications
Can you offer guidance on writing for the TipKit framework?
With TipKit - which displays tips that help people discover features in your app - concise writing is key. Use tips to highlight a brand-new feature in your app, help people discover a hidden feature, or demonstrate faster ways to accomplish a task. Keep your tips to just one idea, and be as clear as possible about the functionality or feature you're highlighting.
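As an illustration of how little room a tip gives you, here's a minimal TipKit sketch. The tip type, its copy, and the surrounding view are hypothetical examples, not Apple-provided content.

import SwiftUI
import TipKit

// A hypothetical tip that sticks to one idea: a short title plus a single-sentence message.
struct FavoriteTip: Tip {
    var title: Text {
        Text("Save your favorites")
    }
    var message: Text? {
        Text("Tap the heart to find this recipe again later.")
    }
}

struct RecipeView: View {
    private let favoriteTip = FavoriteTip()

    var body: some View {
        VStack {
            // The tip appears inline, right next to the control it describes.
            TipView(favoriteTip)
            Button {
                // Dismiss the tip once the person performs the action it teaches.
                favoriteTip.invalidate(reason: .actionPerformed)
            } label: {
                Image(systemName: "heart")
            }
        }
        .task {
            // Load and configure tip state once per launch.
            try? Tips.configure()
        }
    }
}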
What's one suggestion you would give writers to improve their content?
One way we find the perfect (or near-perfect) sentence is to show it to other people, including other writers, designers, and creative partners. If you don't have that option, run your writing by someone else working on your app or even a customer. And you can always read out loud to yourself - it's an invaluable way to make your writing sound conversational, and a great way to find and cut unnecessary words.
Hello Developer: February 2024
Welcome to the first Hello Developer of the spatial computing era. In this edition: Join us to celebrate International Women's Day all over the world, find out how the Fantastical team brought their app to life on Apple Vision Pro, get UX writing advice straight from Apple experts, and catch up on the latest news and documentation.
FEATURED
Join us for International Women's Day celebrations
This March, we're honoring International Women's Day with developer activities all over the world. Celebrate and elevate women in app development through a variety of sessions, panels, and performances.
FEATURED
"The best version we've ever made": Fantastical comes to Apple Vision Pro
The best-in-class calendar app Fantastical has 11 years of history, a shelf full of awards, and plenty of well-organized fans on iPad, iPhone, Mac, and Apple Watch. Yet Fantastical's Michael Simmons says the app on Apple Vision Pro is "the best version we've ever made." Find out what Simmons learned while building for visionOS - and what advice he'd give fellow developers bringing their apps to Apple Vision Pro.
"The best version we've ever made": Fantastical comes to Apple Vision Pro View now
Q&A
Get advice from the Apple UX writing team
Writing is fundamental - especially in your apps and games, where the right words can have a profound impact on your app's experience. During WWDC23, the Apple UX writing team hosted a wide-ranging Q&A that covered everything from technical concepts to inspiring content to whether apps should have "character."
Q&A with the Apple UX writing team View now
NEWS
Download the Apple Developer app on visionOS
Apple Developer has come to Apple Vision Pro. Experience a whole new way to catch up on WWDC videos, browse news and features, and stay up to date on the latest Apple frameworks and technologies.
Download Apple Developer from the App Store
VIDEOS
Dive into Xcode Cloud, Apple Pay, and network selection
This month's new videos cover a lot of ground. Learn how to connect your source repository with Xcode Cloud, find out how to get started with Apple Pay on the Web, and discover how your app can automatically select the best network for an optimal experience.
Connect your project to Xcode Cloud Watch now
Get started with Apple Pay on the Web Watch now
Adapt to changing network conditions Watch now
BEHIND THE DESIGN
Rebooting an inventive puzzle game for visionOS
Bringing the mind-bending puzzler Blackbox to Apple Vision Pro presented Ryan McLeod with a challenge and an opportunity like nothing he'd experienced before. Find out how McLeod and team are making the Apple Design Award-winning game come to life on the infinite canvas. Then, catch up on our Apple Vision Pro developer interviews and Q&As with Apple experts.
Blackbox: Rebooting an inventive puzzle game for visionOS View now
Apple Vision Pro developer stories and Q&As View now
MEET WITH APPLE EXPERTS
Sign up for developer activities
This month, you can learn to minimize churn and win back subscribers in an online session hosted by App Store experts, and meet with App Review to explore best practices for a smooth review process. You can also request to attend an in-person lab in Cork, Ireland, to help develop your alternative app marketplace on iOS in the European Union. View the full schedule of activities.
DOCUMENTATION
Explore and create with new and updated docs
- Track specific points in world space: In this new sample app, you'll learn to use world anchors along with an ARKit session's WorldTrackingProvider to create coherence and continuity in a 3D world (a rough sketch follows this list).
- Explore over 400 newly localized SF symbols: Download the latest version of SF Symbols to browse the updates.
- Preview your app's interface in Xcode: Iterate designs quickly and preview your displays across Apple devices.
- Set up or add a Border Router to your Thread network: Configure a Border Router as a bridge between the Thread and Wi-Fi or Ethernet networks in a home.
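To sketch what the world-anchor pattern looks like in code, here's a rough, hedged outline for visionOS; it assumes the app already has an immersive space open, and the transform value and print statement are purely illustrative.

import ARKit
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func anchorContent() async throws {
    // World tracking only delivers data while an immersive space is open.
    try await session.run([worldTracking])

    // Persist a point one meter in front of the world origin across sessions.
    let anchor = WorldAnchor(originFromAnchorTransform: Transform(translation: [0, 0, -1]).matrix)
    try await worldTracking.addAnchor(anchor)

    // Observe anchors being added, updated, or removed.
    for await update in worldTracking.anchorUpdates {
        print(update.event, update.anchor.id)
    }
}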
View the full list of new resources.
Discover whatâs new in the Human Interface Guidelines.
NEWS
Catch up on the latest updates
- Swift Student Challenge applications are open: Learn about past Challenge winners and get everything you need to create an awesome app playground.
- App Store Connect API 3.2: Manage your apps on the App Store for Apple Vision Pro and download new Sales and Trends install reports, including information about historical first annual installs.
- New StoreKit entitlement: If your app offers in-app purchases on the App Store for iPhone or iPad in the United States, you can include a link to your website to let people know of other ways to purchase your digital goods or services.
- New reports and sign-in options: You'll soon be able to view over 50 new reports to help measure your apps' performance. And you can take advantage of new flexibility when asking users to sign in to your app.
- App distribution in the European Union: We're sharing some changes to iOS, Safari, and the App Store, impacting developers' apps in the EU to comply with the Digital Markets Act.
- App Store Review Guideline update: Check out the latest changes to support updated policies and provide clarification.
Want to get Hello Developer in your inbox? Make sure you've opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We'd love to hear from you. If you have suggestions for our activities or stories, please let us know.
"The best version we've ever made": Fantastical comes to Apple Vision Pro
The best-in-class calendar app Fantastical has more than a decade of history, a shelf full of awards, and plenty of well-organized fans on iPad, iPhone, Mac, and Apple Watch. Yet Michael Simmons, CEO and lead product designer for Flexibits, the company behind Fantastical, says the Apple Vision Pro app is "the best version we've ever made." We asked Simmons about what he's learned while building for visionOS, his experiences visiting the developer labs, and what advice he'd give fellow developers bringing their apps to Vision Pro.
What was your initial approach to bringing Fantastical from iPad to Apple Vision Pro?
The first thing we did was look at the platform to see if a calendar app made sense. We thought: "Could we do something here that's truly an improvement?" When the answer was yes, we moved on to, "OK, what are the possibilities?" And of course, visionOS gives you unlimited possibilities. You're not confined to borders; you have the full canvas of the world to create on.
We wanted to take advantage of that infinite canvas. But we also needed to make sure Fantastical felt right at home in visionOS. People want to feel like there's a human behind the design - especially in our case, where some customers have been with us for almost 13 years. There's a legacy there, and an expectation that what you'll see will feel connected to what we've done for more than a decade.
I play guitar, so to me it felt like learning an instrument.
Michael Simmons, CEO and lead product designer for Flexibits
In the end, it all felt truly seamless, so much so that once Fantastical was finished, we immediately said, "Well, let's do [the company's contacts app] Cardhop too!"
Was there a moment when you realized, "We've really got something here"?
It happened as instantly as it could. I play guitar, so to me it felt like learning an instrument. One day it just clicks - the songs, the notes, the patterns - and feels like second nature. For me, it felt like those movies where a musical prodigy feels the music flowing out of them.
How did you approach designing for visionOS?
We focused a lot on legibility of the fonts, buttons, and other screen elements. The opaque background didn't play well with elements from other operating systems, for example, so we tweaked it. We stayed consistent with design language, used system-provided colors as much as possible, built using mainly UIKit, and used SwiftUI for ornaments and other fancy Vision Pro elements. It's incredible how great the app looked without us needing to rewrite a bunch of code.
How long did the process take?
It was five months from first experiencing the device to submitting a beautiful app. Essentially, that meant three months to ramp up - check out the UI, explore what was doable, and learn the tools and frameworks - and two more months to polish, refine, and test. That's crazy fast! And once we had that domain knowledge, we were able to do Cardhop in two months. So I'd say if you have an iPad app and that knowledge, it takes just months to create an Apple Vision Pro version of your app.
What advice would you give to other developers looking to bring their iPhone or iPad apps to Apple Vision Pro?
Make sure your app is appropriate for the platform. Look at the device - all of its abilities and possibilities - and think about how your app would feel with unlimited real estate. And if your app makes sense - and most apps do make sense - and you're already developing for iPad, iPhone, or Mac, it's a no-brainer to bring it to Apple Vision Pro.
Updates to support app distribution changes in the European Union
We recently announced changes to iOS, Safari, and the App Store impacting developers' apps in the European Union (EU) to comply with the Digital Markets Act (DMA), supported by more than 600 new APIs, a wide range of developer tools, and related documentation.
And we're continuing to provide new ways for developers to understand and utilize these changes, including:
- Online consultations to discuss alternative distribution on iOS, alternative payments on the App Store, linking out to purchase on their webpage, new business terms, and more.
- Labs to help develop alternative app marketplaces on iOS.
Developers who have agreed to the new business terms can now use new features in App Store Connect and the App Store Connect API to set up marketplace distribution and marketplace apps, and use TestFlight to beta test these features. TestFlight also supports apps using alternative browser engines, and alternative payments through payment service providers and linking out to a webpage.
And soon, you'll be able to view expanded app analytics reports for the App Store and iOS.
App Store Connect upload requirement starts April 29
Apps uploaded to App Store Connect must be built with Xcode 15 for iOS 17, iPadOS 17, tvOS 17, or watchOS 10, starting April 29, 2024.
Apply for the Swift Student Challenge now through February 25
Every year, the Swift Student Challenge aims to inspire students to create amazing app playgrounds that can make life better for their communities - and beyond.
Have an app idea that's close to your heart? Now's your chance to make it happen. Build an app playground and submit by February 25.
All winners receive a year of complimentary membership in the Apple Developer Program and other exclusive awards. And for the first time ever, we'll award a select group of Distinguished Winners a trip to Apple Park for an incredible in-person experience.
Request a consultation about the changes to apps distributed in the European Union
Meet with an Apple team member to discuss changes to iOS, Safari, and the App Store impacting apps in the European Union to comply with the Digital Markets Act. Topics include alternative distribution on iOS, alternative payments in the App Store, linking out to purchase on your webpage, new business terms, and more.
Request a 30-minute online consultation to ask questions and provide feedback about these changes.
In addition, if you're interested in getting started with operating an alternative app marketplace on iOS in the European Union, you can request to attend an in-person lab in Cork, Ireland.
Blackbox: Rebooting an inventive puzzle game for visionOS
If you've ever played Blackbox, you know that Ryan McLeod builds games a little differently.
In the inventive iOS puzzler from McLeod's studio, Shapes & Stories, players solve challenges not by tapping or swiping but by rotating the device, plugging in the USB cable, singing a little tune - pretty much everything except touching the screen.
"The idea was to get people in touch with the world outside their device," says McLeod, while ambling along the canals of his Amsterdam home base.
I'm trying to figure out what makes Blackbox tick on iOS, and how to bring that to visionOS. That requires some creative following of my own rules - and breaking some of them.
Ryan McLeod
In fact, McLeod freed his puzzles from the confines of a device screen well before Apple Vision Pro was even announced - which made bringing the game to this new platform a fascinating challenge. On iOS and iPadOS, Blackbox plays off the familiarity of our devices. But how do you transpose that experience to a device people haven't tried yet? And how do you break boundaries on a canvas that doesn't have any? "I do love a good constraint," says McLeod, "but it has been fun to explore the lifting of that restraint. I'm trying to figure out what makes Blackbox tick on iOS, and how to bring that to visionOS. That requires some creative following of my own rules - and breaking some of them."
After a brief onboarding, the game becomes an all-new visionOS experience that takes advantage of the spatial canvas right from the first level selection. "I wanted something a little floaty and magical, but still grounded in reality," he says. "I landed on the idea of bubbles. They're like soap bubbles: They're natural, they have this hyper-realistic gloss, and they move in a way you're familiar with. The shader cleverly pulls the reflection of your world into them in this really believable, intriguing way."
And the puzzles within those bubbles? "Unlike Blackbox on iOS, you're not going to play this when you're walking home from school or waiting in line," McLeod says. "It had to be designed differently. No matter how exciting the background is, or how pretty the sound effects are, it's not fun to just stare at something, even if it's bobbing around really nicely."
Ryan McLeodâs notebook shows pen sketches of what will become Blackbox on Apple Vision Pro.
Now, McLeod cautions that Blackbox is still very much a work in progress, and we're certainly not here to offer any spoilers. But if you want to go in totally cold, it might be best to skip this next part.
In Blackbox, players interact with the space - and their own senses - to explore and solve challenges. One puzzle involves moving your body in a certain manner; another involves sound, silence, and a blob of molten gold floating like an alien in front of you. A third involves Morse code. And solving yet another causes part of the scene to collapse into a portal. "Spatial Audio makes the whole thing kind of alarming but mesmerizing," he says.
There's an advantage to not knowing expected or common patterns.
Ryan McLeod
It's safe to say Blackbox will continue evolving, especially since McLeod is essentially building this plane as he's flying it - something he views as a positive. "There's an advantage to not knowing expected or common patterns," he says. "There's just so much possibility."
Apple Vision Pro developer stories and Q&As
Meet some of the incredible teams building for visionOS, and get answers from Apple experts on spatial design and creating great apps for Apple Vision Pro.
Developer stories
"The best version we've ever made": Fantastical comes to Apple Vision Pro View now
Blackbox: Rebooting an inventive puzzle game for visionOS View now
"The full impact of fruit destruction": How Halfbrick cultivated Super Fruit Ninja on Apple Vision Pro View now
Realizing their vision: How djay designed for visionOS View now
JigSpace is in the driver's seat View now
PTC is uniting the makers View now
Q&As
Q&A: Spatial design for visionOS View now
Q&A: Building apps for visionOS View now
Price and tax updates for apps, in-app purchases, and subscriptions
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help ensure that prices for apps and in-app purchases remain consistent across all storefronts.
Price updates
On February 13, pricing for apps and in-app purchases* will be updated for the Benin, Colombia, Tajikistan, and Türkiye storefronts. These updates also account for the following tax changes:
- Benin: value-added tax (VAT) introduction of 18%
- Tajikistan: VAT rate decrease from 15% to 14%
Prices will be updated on the Benin, Colombia, Tajikistan, and Türkiye storefronts if you haven't selected one of these as the base for your app or in-app purchase.*
Prices won't change on the Benin, Colombia, Tajikistan, or Türkiye storefront if you've selected that storefront as the base for your app or in-app purchase.* Prices on other storefronts will be updated to maintain equalization with your chosen base price.
Prices won't change in any region if your in-app purchase is an auto-renewable subscription and won't change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of My Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, in-app purchases, and auto-renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app's base country or region
Pricing and availability start times by region
Set a price for an in-app purchase
Tax updates
Your proceeds for sales of apps and in-app purchases will change to reflect the new tax rates and updated prices. Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Benin.
On January 30, your proceeds from the sale of eligible apps and in-app purchases were modified in the following countries to reflect introductions or changes in VAT rates.
- Benin: VAT introduction of 18%
- Czechia: VAT rate decreased from 10% to 0% for certain eBooks and audiobooks
- Czechia: VAT rate increased from 10% to 12% for certain eNewspapers and Magazines
- Estonia: VAT rate increased from 20% to 22%
- Ireland: VAT rate decreased from 9% to 0% for certain eBooks and audiobooks
- Luxembourg: VAT rate increased from 16% to 17%
- Singapore: GST rate increased from 8% to 9%
- Switzerland: VAT rate increased from 2.5% to 2.6% for certain eNewspapers, magazines, books and audiobooks
- Switzerland: VAT rate increased from 7.7% to 8.1% for all other apps and in-app purchases
- Tajikistan: VAT rate decreased from 15% to 14%
*Excludes auto-renewable subscriptions.
Get ready with the latest beta releases
The beta versions of iOS 17.4, iPadOS 17.4, macOS 14.4, tvOS 17.4, and watchOS 10.4 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.3 beta.
Apple introduces new options worldwide for streaming game services and apps that provide access to mini apps and games
New analytics reports coming in March for developers everywhere
Developers can also enable new sign-in options for their apps
Today, Apple is introducing new options for how apps globally can deliver in-app experiences to users, including streaming games and mini-programs. Developers can now submit a single app with the capability to stream all of the games offered in their catalog.
Apps will also be able to provide enhanced discovery opportunities for streaming games, mini-apps, mini-games, chatbots, and plug-ins that are found within their apps.
Additionally, mini-apps, mini-games, chatbots, and plug-ins will be able to incorporate Apple's In-App Purchase system to offer their users paid digital content or services for the first time, such as a subscription for an individual chatbot.
Each experience made available in an app on the App Store will be required to adhere to all App Store Review Guidelines and its host app will need to maintain an age rating of the highest age-rated content included in the app.
The changes Apple is announcing reflect feedback from Apple's developer community and are consistent with the App Store's mission to provide a trusted place for users to find apps they love and to give developers everywhere new capabilities to grow their businesses. Apps that host this content are responsible for ensuring all the software included in their app meets Apple's high standards for user experience and safety.
New app analytics
Apple provides developers with powerful dashboards and reports to help them measure their apps' performance through App Analytics, Sales and Trends, and Payments and Financial Reports. Today, Apple is introducing new analytics for developers everywhere to help them get even more insight into their businesses and their apps' performance, while maintaining Apple's long-held commitment to ensure users are not identifiable at an individual level.
Over 50 new reports will be available through the App Store Connect API to help developers analyze their app performance and find opportunities for improvement with more metrics in areas like:
Engagement - with additional information on the number of users on the App Store interacting with a developer's app or sharing it with others;
Commerce - with additional information on downloads, sales and proceeds, pre-orders, and transactions made with the App Store's secure In-App Purchase system;
App usage - with additional information on crashes, active devices, installs, app deletions, and more;
Frameworks usage - with additional information on an app's interaction with OS functionality such as PhotoPicker, Widgets, and CarPlay.
Additional information about report details and access will be available for developers in March.
Developers will have the ability to grant third-party access to their reports conveniently through the API.
More flexibility for sign-in options in apps
In line with Apple's mission to protect user privacy, Apple is updating its App Store Review Guideline for using Sign in with Apple. Sign in with Apple makes it easy for users to sign in to apps and websites using their Apple ID and was built from the ground up with privacy and security in mind. Starting today, developers that offer third-party or social login services within their app will have the option to offer Sign in with Apple, or they will now be able to offer an equivalent privacy-focused login service instead.
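For reference, offering Sign in with Apple in a SwiftUI app can be as small as the sketch below; the requested scopes, view name, and result handling are illustrative choices, not part of the policy change.

import SwiftUI
import AuthenticationServices

struct LoginView: View {
    var body: some View {
        // The system-provided button drives the Apple ID authorization flow.
        SignInWithAppleButton(.signIn) { request in
            // Request only the data the app actually needs.
            request.requestedScopes = [.fullName, .email]
        } onCompletion: { result in
            switch result {
            case .success(let authorization):
                if let credential = authorization.credential as? ASAuthorizationAppleIDCredential {
                    print("Signed in with user identifier:", credential.user)
                }
            case .failure(let error):
                print("Authorization failed:", error)
            }
        }
        .frame(height: 44)
    }
}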
Update on apps distributed in the European Union
We're sharing some changes to iOS, Safari, and the App Store, impacting developers' apps in the European Union (EU) to comply with the Digital Markets Act (DMA). These changes create new options for developers who distribute apps in any of the 27 EU member states, and do not apply to apps distributed anywhere else in the world. These options include how developers can distribute apps on iOS, process payments, use web browser engines in iOS apps, request interoperability with iPhone and iOS hardware and software features, access data and analytics about their apps, and transfer App Store user data.
If you want nothing to change for you - from how the App Store works currently in the EU and in the rest of the world - no action is needed. You can continue to distribute your apps only on the App Store and use its private and secure In-App Purchase system.
Updated App Store Review Guidelines now available
The App Store Review Guidelines have been revised to support updated policies, upcoming features, and to provide clarification. We now also indicate which guidelines only apply to Notarization for iOS apps in the European Union.
The following guidelines have been divided into subsections for the purposes of Notarization for iOS apps in the EU:
- 2.3.1
- 2.5.16
- 4.1
- 4.3
- 4.6
- 5.1.4
- 5.2.4
The following guidelines have been deleted:
- 2.5.7
- 3.2.2(vi)
- 4.2.4
- 4.2.5
- 4.4.3
2.5.6: Added a link to an entitlement to use an alternative web browser engine in your app in the EU.
3.1.6: Moved to 4.9.
3.2.2(ii): Moved to 4.10.
4.7: Edited to set forth new requirements for mini apps, mini games, streaming games, chatbots, and plug-ins.
4.8: Edited to require an additional login service with certain privacy features if you use a third-party or social login service to set up or authenticate a user's primary account.
4.9: The original version of this rule (Streaming games) has been deleted and replaced with the Apple Pay guideline.
5.1.2(i): Added that apps may not require users to enable system functionalities (e.g., push notifications, location services, tracking) in order to access functionality, content, use the app, or receive monetary or other compensation, including but not limited to gift cards and codes. A version of this rule was originally published as Guideline 3.2.2(vi).
After You Submit â Appeals: Edited to add an updated link for suggestions for changes to the Guidelines.
The term âauto-renewing subscriptionsâ was replaced with âauto-renewable subscriptionsâ throughout.
Translations of the guidelines will be available on the Apple Developer website within one month.
Swift Student Challenge applications open February 5
We're so excited that applications for the Swift Student Challenge 2024 will open on February 5.
Looking for some inspiration? Learn about past Challenge winners to gain insight into the motivations behind their apps.
Just getting started? Get tools, tips, and guidance on everything you need to create an awesome app playground.
"The full impact of fruit destruction": How Halfbrick cultivated Super Fruit Ninja on Apple Vision Pro
Fruit Ninja has a juicy history that stretches back more than a decade, but Samantha Turner, lead gameplay programmer at the game's Halfbrick Studios, says the Apple Vision Pro version - Super Fruit Ninja on Apple Arcade - is truly bananas. "When it first came out, Fruit Ninja kind of gave new life to the touchscreen," she notes, "and I think we have the potential to do something very special here."
What if players could squeeze juice out of an orange? What if they could rip apart a watermelon and cover the table and walls with juice?
Samantha Turner, lead gameplay programmer at Halfbrick Studios
Turner would know. She's worked on the Fruit Ninja franchise for nearly a decade, which makes her especially well suited to help grow the game on a new platform. "We needed to understand how to bring those traditional 2D user interfaces into the 3D space," she says. "We were full of ideas: What if players could squeeze juice out of an orange? What if they could rip apart a watermelon and cover the table and walls with juice?" She laughs, on a roll. "We were really playing with the environment."
But they also needed to get people into that environment. "That's where we came up with the flying menu," she says, referring to the old-timey home screen that'll feel familiar to Fruit Ninja fans, except for how it hovers in space. "We wanted a friendly and welcoming way to bring people into the immersive space," explains Turner. "Before we landed on the menu, we were doing things like generating 3D text to put on virtual objects. But that didn't give us the creative freedom we needed to set the theme for our world."
To create Super Fruit Ninja, the Halfbrick team worked to bring "traditional 2D interfaces into the 3D space."
That theme: The good citizens of Fruitasia have discovered a portal to our world - one that magically materializes in the room. "Sensei steps right through the portal," says Turner, "and you can peek back into their world too."
Next, Turner and Halfbrick set about creating a satisfying - and splashy - way for people to interact with their space. The main question: What's the most logical way to launch fruit at people?
"We started with, OK, you have a couple meters square in front of you. What will the playspace look like? What if there's a chair or a table in the way? How do we work around different scenarios for people in their office or living room or kitchen?" To find their answers, Halfbrick built RealityKit prototypes. "Just being able to see those really opened up the possibilities." The answer? A set of cannons, arranged in a semicircle at the optimal distance for efficient slashing.
Instead of holding blades, you simply use your hands.
Samantha Turner, lead gameplay programmer at Halfbrick Studios
That also let them move on to the question of how players can carve up a bunch of airborne bananas in a 3D space. The team experimented with a variety of hand motions, but none felt as satisfying as the final result. "Instead of holding blades, you simply use your hands," she says. "You become the weapon."
And you're a powerful weapon. Slice and dice pineapples and watermelons by jabbing with your hands. Send bombs away by pushing them to a far wall, where they harmlessly explode at a distance. Fire shuriken into floating fruit by brushing your palms in an outward direction - a motion Turner particularly likes. "It's satisfying to see it up close, but when you see it happen far away, you get the full impact of fruit destruction," she laughs. All of these grew out of the team's hand-gesture explorations.
Truffles the pig awaits his reward in Super Fruit Ninja.
"We always knew hands would be the center of the experience," she says. "We wanted players to be able to grab things and knock them away. And we can tailor the arc of the fruit to make sure it's a comfortable fruit-slicing experience - we're actually using the vertical position of the device itself to make sure that we're not throwing fruit over your head or too low."
The result is the most immersive - and possibly most entertaining - Fruit Ninja to date, not just for players but for the creators. "Honestly," Turner says, "this version is one of my favorites."
Realizing their vision: How djay designed for visionOS
Years ago, early in his professional DJ career, Algoriddim cofounder and CEO Karim Morsy found himself performing a set atop a castle tower on the Italian coast. Below him, a crowd danced in the ruins; before him stretched a moonlit coastline and the Mediterranean Sea. "It was a pretty inspiring environment," Morsy says, probably wildly underselling this.
Through their app djay, Morsy and Algoriddim have worked to recreate that live DJ experience for nearly 20 years. The best-in-class DJ app started life as boxed software for Mac; subsequent versions for iPad offered features like virtual turntables and beat matching. The app was a smashing success that won an Apple Design Award in both 2011 and 2016.
On Apple Vision Pro, djay transports people to a number of inventive immersive environments.
But Morsy says all that previous work was prologue to djay on the infinite canvas. "When we heard about Apple Vision Pro," he says, "it felt like djay was this beast that wanted to be unleashed. Our vision - no pun intended - with Algoriddim was to make DJing accessible to everyone," he says. Apple Vision Pro, he says, represents the realization of that dream. "The first time I experienced the device was really emotional. I wanted to be a DJ since I was a child. And suddenly here were these turntables, and the night sky, and the stars above me, and this light show in the desert. I felt like, 'This is the culmination of everything. This is the feeling I've been wanting people to experience.'"
When we heard about Apple Vision Pro, it felt like djay was this beast that wanted to be unleashed.
Karim Morsy, Algoriddim cofounder and CEO
Getting to that culmination necessitated what Morsy calls "the wildest sprint of our lives." With a 360-degree canvas to explore, the team rethought the entire process of how people interacted with djay. "We realized that with a decade of building DJ interfaces, we were taking a lot for granted," he says. "So the first chunk of designing for Apple Vision Pro was going back to the drawing board and saying, 'OK, maybe this made sense 10 years ago with a computer and mouse, but why do we need it now? Why should people have to push a button to match tempos - shouldn't that be seamless?' There was so much we could abstract away."
Spin in a fully immersive environment, or bring your two turntables into the room with you.
They also thought about environments. djay offers a windowed view, a shared space that brings 3D turntables into your environment, and several forms of full immersion. The app first opens to the windowed view, which should feel familiar to anyone who's spun on the iPad app: a simple UI of two decks. The volumetric view brings into your room not just turntables, but the app's key moment: the floating 3D cube that serves as djay's effects control pad.
But those immersive scenes are where Morsy feels people can truly experience reacting to and feeding off the environment. There's an LED wall that reflects colors from the artwork of the currently playing song, a nighttime desert scene framed by an arena of lights, and a space lounge - complete with dancing robots - that offers a great view of planet Earth. The goal of those environments is to help create the "flow state" that's sought by live DJs. "You want to get into a loop where the environment influences you and vice versa," Morsy says.
From left: Algoriddim's Karim Morsy, Frederik Seiffert, and Federico Tessmann work on updates to their app with the proper equipment.
In the end, this incredible use of technology serves a very simple purpose: interacting with the music you love. Morsy - a musician himself - points to a piano he keeps in his office. "That piano has had the same interface for hundreds of years," he says. "That's what we're trying to reach, that sweet spot between complexity and ease of use. With djay on Vision Pro, it's less about, 'Let's give people bells and whistles,' and more, 'Let's let them have this experience.'"
Hello Developer: January 2024
Welcome to Hello Developer. In this Apple Vision Pro-themed edition: Find out how to submit your visionOS apps to the App Store, learn how the team behind djay approached designing for the infinite canvas, and get technical answers straight from Apple Vision Pro engineers. Plus, catch up on the latest news, documentation, and developer activities.
FEATURED
Submit your apps to the App Store for Apple Vision Pro
Apple Vision Pro will have a brand-new App Store, where people can discover and download all the incredible apps available for visionOS. Whether you've created a new visionOS app or are making your existing iPad or iPhone app available on Apple Vision Pro, here's everything you need to know to prepare and submit your app to the App Store.
BEHIND THE DESIGN
Realizing their vision: How djay designed for visionOS
Algoriddim CEO Karim Morsy says Apple Vision Pro represents "the culmination of everything" for his app, djay. In the latest edition of Behind the Design, find out how this incredible team approached designing for the infinite canvas.
Realizing their vision: How djay designed for visionOS View now
Q&A
Get answers from Apple Vision Pro engineers
In this Q&A, Apple Vision Pro engineers answer some of the most frequently asked questions from Apple Vision Pro developer labs all over the world.
Q&A: Building apps for visionOS View now
COLLECTION
Reimagine your enterprise apps on Apple Vision Pro
Discover the languages, tools, and frameworks you'll need to build and test your apps for visionOS. Explore videos and resources that showcase productivity and collaboration, simulation and training, and guided work. And dive into workflows for creating or converting existing media, incorporating on-device and remote assets into your app, and much more.
Reimagine your enterprise apps on Apple Vision Pro View now
MEET WITH APPLE EXPERTS
Submit your request for developer labs and App Review consultations
Join us this month in the Apple Vision Pro developer labs to get your apps ready for visionOS. With help from Apple, you'll be able to test, refine, and finalize your apps and games. Plus, Apple Developer Program members can check out one-on-one App Review, design, and technology consultations, offered in English, Spanish, Brazilian Portuguese, and more.
DOCUMENTATION
Check out visionOS sample apps, SwiftUI tutorials, audio performance updates, and more
These visionOS sample apps feature refreshed audio, visual, and timing elements, simplified collision boxes, and performance improvements.
- Hello World: Use windows, volumes, and immersive spaces to teach people about the Earth.
- Happy Beam: Leverage a Full Space to create a game using ARKit.
- Diorama: Design scenes for your visionOS app using Reality Composer Pro.
- Swift Splash: Use RealityKit to create an interactive ride in visionOS.
And these resources and updated tutorials cover iOS 17, accessibility, Live Activities, and audio performance.
- SwiftUI Tutorials: Learn the latest best practices for iOS 17.
- Accessibility Inspector: Review your app's accessibility experience.
- Starting and updating Live Activities with ActivityKit push notifications: Use push tokens to update and end Live Activities.
- Analyzing audio performance with Instruments: Ensure a smooth and immersive audio experience using Audio System Trace.
View the full list of new resources.
Discover what's new in the Human Interface Guidelines.
NEWS
Catch up on the latest updates
- Announcing contingent pricing: Give customers discounted pricing when they're subscribed to a different subscription on the App Store.
- Updated agreements and guidelines now available: Check out the latest changes that have been made to support updated policies and provide clarification.
Want to get Hello Developer in your inbox? Make sure you've opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We'd love to hear from you. If you have suggestions for our activities or stories, please let us know.
Q&A: Building apps for visionOS
Over the past few months, Apple experts have fielded questions about visionOS in Apple Vision Pro developer labs all over the world. Here are answers to some of the most frequent questions they've been asked, including insights on new concepts like entities, immersive spaces, collision shapes, and much more.
How can I interact with an entity using gestures?
There are three important pieces to enabling gesture-based entity interaction:
- The entity must have an InputTargetComponent. Otherwise, it won't receive gesture input at all.
- The entity must have a CollisionComponent. The shapes of the collision component define the regions that gestures can actually hit, so make sure the collision shapes are specified appropriately for interaction with your entity.
- The gesture that you're using must be targeted to the entity you're trying to interact with (or to any entity). For example:
private var tapGesture: some Gesture {
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { gestureValue in
            let tappedEntity = gestureValue.entity
            print(tappedEntity.name)
        }
}
It's also a good idea to give an interactive entity a HoverEffectComponent, which enables the system to trigger a standard highlight effect when the user looks at the entity.
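Putting those pieces together, here's a minimal sketch of configuring an entity for interaction; the function name, box size, and entity name are illustrative assumptions rather than code from a sample project:

import RealityKit

// Minimal sketch: make an entity hittable by gestures and highlightable on look.
func makeInteractiveEntity() -> ModelEntity {
    let entity = ModelEntity(mesh: .generateBox(size: 0.2), materials: [SimpleMaterial()])
    entity.name = "interactiveBox"

    // 1. Allow the entity to receive gesture input.
    entity.components.set(InputTargetComponent())
    // 2. Define the region that gestures can actually hit.
    entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
    // 3. Let the system apply its standard highlight when the user looks at the entity.
    entity.components.set(HoverEffectComponent())

    return entity
}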
Should I use a window group, an immersive space, or both?
Consider the technical differences between windows, volumes, and immersive spaces when you decide which scene type to use for a particular feature in your app.
Here are some significant technical differences that you should factor into your decision:
- Windows and volumes from other apps the user has open are hidden when an immersive space is open.
- Windows and volumes clip content that exceeds their bounds.
- Users have full control over the placement of windows and volumes. Apps have full control over the placement of content in an immersive space.
- Volumes have a fixed size, while windows are resizable.
- ARKit only delivers data to your app if it has an open immersive space.
Explore the Hello World sample code to familiarize yourself with the behaviors of each scene type in visionOS.
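As a minimal sketch of how the two scene types can coexist in one app (the scene identifier, button label, and placeholder content are assumptions, not the Hello World sample):

import SwiftUI
import RealityKit

@main
struct ExampleSpatialApp: App {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some Scene {
        // A window: the user places and resizes it, and it clips content at its bounds.
        WindowGroup {
            Button("Enter immersive space") {
                Task { _ = await openImmersiveSpace(id: "Immersive") }
            }
        }

        // An immersive space: the app places content, and other apps' windows hide while it's open.
        ImmersiveSpace(id: "Immersive") {
            RealityView { content in
                content.add(ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [SimpleMaterial()]))
            }
        }
    }
}

Starting in the window and opening the space on demand follows the general guidance of letting people choose how immersed they want to be.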
How can I visualize collision shapes in my scene?
Use the Collision Shapes debug visualization in the Debug Visualizations menu, where you can find several other helpful debug visualizations as well. For information on debug visualizations, check out Diagnosing issues in the appearance of a running app.
Can I position SwiftUI views within an immersive space?
Yes! You can position SwiftUI views in an immersive space with the offset(x:y:) and offset(z:) methods. It's important to remember that these offsets are specified in points, not meters. You can utilize PhysicalMetric to convert meters to points.
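For instance, here's a minimal sketch (the view name and distance are illustrative) that pushes a view back by one physical meter:

import SwiftUI

struct OffsetExampleView: View {
    // Converts one meter into points for the current context.
    @PhysicalMetric(from: .meters) private var oneMeter = 1.0

    var body: some View {
        Text("One meter away")
            .offset(z: -oneMeter) // Negative z moves the view away from the viewer.
    }
}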
What if I want to position my SwiftUI views relative to an entity in a reality view?
Use the RealityView attachments API to create a SwiftUI view and make it accessible as a ViewAttachmentEntity. This entity can be positioned, oriented, and scaled just like any other entity.
RealityView { content, attachments in
    // Fetch the attachment entity using the unique identifier.
    let attachmentEntity = attachments.entity(for: "uniqueID")!
    // Add the attachment entity as RealityView content.
    content.add(attachmentEntity)
} attachments: {
    // Declare a view that attaches to an entity.
    Attachment(id: "uniqueID") {
        Text("My Attachment")
    }
}
Can I position windows programmatically?
There's no API available to position windows, but we'd love to know about your use case. Please file an enhancement request. For more information on this topic, check out Positioning and sizing windows.
Is there any way to know what the user is looking at?
As noted in Adopting best practices for privacy and user preferences, the system handles camera and sensor inputs without passing the information to apps directly. There's no way to get precise eye movements or exact line of sight. Instead, create interface elements that people can interact with and let the system manage the interaction. If you have a use case that you can't get to work this way, and as long as it doesn't require explicit eye tracking, please file an enhancement request.
When are the onHover and onContinuousHover actions called on visionOS?
The onHover and onContinuousHover actions are called when a finger is hovering over the view, or when the pointer from a connected trackpad is hovering over the view.
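A minimal sketch of both callbacks (the view and the printed messages are illustrative):

import SwiftUI

struct HoverExampleView: View {
    @State private var isHovering = false

    var body: some View {
        Text(isHovering ? "Hovering" : "Idle")
            .padding()
            .onHover { hovering in
                isHovering = hovering
            }
            .onContinuousHover { phase in
                switch phase {
                case .active(let location):
                    print("Hover at \(location)")
                case .ended:
                    print("Hover ended")
                }
            }
    }
}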
Can I show my own immersive environment textures in my app?
If your app has an ImmersiveSpace open, you can create a large sphere with an UnlitMaterial and scale it to have inward-facing geometry:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            do {
                // Create the sphere mesh.
                let mesh = MeshResource.generateSphere(radius: 10)
                // Create an UnlitMaterial.
                var material = UnlitMaterial(applyPostProcessToneMap: false)
                // Give the UnlitMaterial your equirectangular color texture.
                let textureResource = try await TextureResource(named: "example")
                material.color = .init(tint: .white, texture: .init(textureResource))
                // Create the model.
                let entity = ModelEntity(mesh: mesh, materials: [material])
                // Scale the model so that its mesh faces inward.
                entity.scale.x *= -1
                content.add(entity)
            } catch {
                // Handle the error.
            }
        }
    }
}
I have existing stereo videos. How can I convert them to MV-HEVC?
AVFoundation provides APIs to write videos in MV-HEVC format. For a full example, download the sample code project Converting side-by-side 3D video to multiview HEVC.
To convert your videos to MV-HEVC:
- Create an AVAsset for each of the left and right views.
- Use AVOutputSettingsAssistant to get output settings that work for MV-HEVC.
- Specify the horizontal disparity adjustment and field of view (this is asset specific). Here's an example:
var compressionProperties = outputSettings[AVVideoCompressionPropertiesKey] as! [String: Any]
// Specifies the parallax plane.
compressionProperties[kVTCompressionPropertyKey_HorizontalDisparityAdjustment as String] = horizontalDisparityAdjustment
// Specifies the horizontal FOV (90 degrees is chosen in this case.)
compressionProperties[kCMFormatDescriptionExtension_HorizontalFieldOfView as String] = horizontalFOV
- Create an AVAssetWriterInputTaggedPixelBufferGroupAdaptor as the input for your AVAssetWriter.
- Create an AVAssetReader for each of the left and right video tracks.
- Read the left and right tracks, then append matching samples to the tagged pixel buffer group adaptor:
// Create a tagged buffer for each stereoView.
let taggedBuffers: [CMTaggedBuffer] = [
    .init(tags: [.videoLayerID(0), .stereoView(.leftEye)], pixelBuffer: leftSample.imageBuffer!),
    .init(tags: [.videoLayerID(1), .stereoView(.rightEye)], pixelBuffer: rightSample.imageBuffer!)
]
// Append the tagged buffers to the asset writer input adaptor.
let didAppend = adaptor.appendTaggedBuffers(taggedBuffers,
                                            withPresentationTime: leftSample.presentationTimeStamp)
How can I light my scene in RealityKit on visionOS?
You can light your scene in RealityKit on visionOS by:
- Using a system-provided automatic lighting environment that updates based on real-world surroundings.
- Providing your own image-based lighting via an ImageBasedLightComponent (see the sketch after this list). To see an example, create a new visionOS app, select RealityKit as the Immersive Space Renderer, and select Full as the Immersive Space.
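Here's a rough sketch of the second approach; the resource name "Sunlight" and the helper function are assumptions, not part of the app template:

import RealityKit

// Minimal sketch: light an entity with your own image-based lighting resource.
func applyImageBasedLight(to entity: Entity) async throws {
    // "Sunlight" is an assumed environment resource bundled with the app.
    let environment = try await EnvironmentResource(named: "Sunlight")

    // This entity provides the light...
    entity.components.set(ImageBasedLightComponent(source: .single(environment)))
    // ...and also receives it (a receiver can reference any light-providing entity).
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}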
You can create materials with custom shading in Reality Composer Pro using the Shader Graph. A material created this way is accessible to your app as a ShaderGraphMaterial, so that you can dynamically change inputs to the shader in your code.
For a detailed introduction to the Shader Graph, watch Explore materials in Reality Composer Pro.
How can I position entities relative to the position of the device?
In an ImmersiveSpace, you can get the full transform of the device using the queryDeviceAnchor(atTimestamp:) method.
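A minimal sketch of that flow (the type name is illustrative; it assumes your app has an open immersive space, since device anchors are only delivered while one is open):

import ARKit
import QuartzCore

@MainActor
final class DevicePoseTracker {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        // Run world tracking so device anchors can be queried.
        try await session.run([worldTracking])
    }

    // Returns the device's transform relative to the space's origin, if tracking is available.
    func currentDeviceTransform() -> simd_float4x4? {
        guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        return anchor.originFromAnchorTransform
    }
}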
Learn more about building apps for visionOS
- Q&A: Spatial design for visionOS
- Spotlight on: Developing for visionOS
- Spotlight on: Developer tools for visionOS
Sample code contained herein is provided under the Apple Sample Code License.
Submit your apps to the App Store for Apple Vision Pro
Apple Vision Pro will have a brand-new App Store, where people can discover and download incredible apps for visionOS. Whether you've created a new visionOS app or are making your existing iPad or iPhone app available on Apple Vision Pro, here's everything you need to know to prepare and submit your app to the App Store.
Updated Apple Developer Program License Agreement now available
The Apple Developer Program License Agreement has been revised to support updated policies and provide clarification. The revisions include:
- Definitions, Section 3.3.3(N): Updated "Tap to Present ID" to "ID Verifier"
- Definitions, Section 14.10: Updated terms regarding governing law and venue
- Section 3.3: Reorganized and categorized provisions for clarity
- Section 3.3.3(B): Clarified language on privacy and third-party SDKs
- Section 6.7: Updated terms regarding analytics
- Section 12: Clarified warranty disclaimer language
- Attachment 1: Updated terms for use of Apple Push Notification Service and Local Notifications
- Attachment 9: Updated terms for Xcode Cloud compute hours included with Apple Developer Program membership
Announcing contingent pricing for subscriptions
Contingent pricing for subscriptions on the App Store, a new feature that helps you attract and retain subscribers, lets you give customers a discounted subscription price as long as they're actively subscribed to a different subscription. It can be used for subscriptions from one developer or two different developers. We're currently piloting this feature and will be onboarding more developers in the coming months. If you're interested in implementing contingent pricing in your app, you can start planning today and sign up to get notified when more details are available in January.
Get ready with the latest beta releases
The beta versions of iOS 17.3, iPadOS 17.3, macOS 14.3, tvOS 17.3, and watchOS 10.3 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.2 beta.
Hello Developer: December 2023
Welcome to Hello Developer. In this edition: Check out new videos on Game Center and the Journaling Suggestions API, get visionOS guidance straight from the spatial design team, meet three App Store Award winners, peek inside the time capsule that is Ancient Board Game Collection, and more.
VIDEOS
Manage Game Center with the App Store Connect API
In this new video, discover how you can use the App Store Connect API to automate your Game Center configurations outside of App Store Connect on the web.
And find out how the new Journaling Suggestions API can help people reflect on the small moments and big events in their lives through your app, all while protecting their privacy.
Discover the Journaling Suggestions API
Q&A
Get your spatial design questions answered
What's the best way to make a great first impression in visionOS? What's a "key moment"? And what are some easy methods for making spatial computing visual design look polished? Get answers to these questions and more.
Q&A: Spatial design for visionOS
FEATURED
Celebrate the winners of the 2023 App Store Awards
Every year, the App Store celebrates exceptional apps that improve people's lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact. Find out how the winning teams behind Finding Hannah, Photomator, and Unpacking approached their incredible work this year.
"We're trying to drive change": Meet three App Store Award-winning teams
Missed the big announcement? Check out the full list of 2023 winners.
NEWS
Xcode Cloud now included with membership
Starting January 2024, all Apple Developer Program memberships will include 25 compute hours per month on Xcode Cloud as a standard, with no additional cost. Learn more.
BEHIND THE DESIGN
Travel back in time with Ancient Board Game Collection
Klemens Strasser's Ancient Board Game Collection blends the new and the very, very old. Its games date back centuries: Hnefatafl is said to be nearly 1,700 years old, while the Italian game Latrunculi is closer to 2,000. "I found a book on ancient board games by an Oxford professor and it threw me right down a rabbit hole," Strasser says. Find out how the Austria-based developer and a team of international artists gave these ancient games new life.
With Ancient Board Game Collection, Klemens Strasser goes back in time
DOCUMENTATION
Get creative with 3D immersion, games, SwiftUI, and more
This month's new sample code, tutorials, and documentation cover everything from games to passing control between apps to addressing reasons for common crashes. Here are a few highlights:
- Game Center matchmaking essentials, rules, and testing: Learn how to create custom matchmaking rules for better matches between players and test the rules before applying them.
- Incorporating real-world surroundings in an immersive experience: This sample code project helps you use scene reconstruction in ARKit to give your app an idea of the shape of the person's surroundings and to bring your app experience into their world.
- Creating a macOS app: Find out how to bring your SwiftUI app to macOS, including adding new views tailored to macOS and modifying others to work better across platforms.
- Creating a watchOS app: Find out how to bring your SwiftUI app to watchOS, including customizing SwiftUI views to display the detail and list views on watchOS.
View the full list of new resources.
View what's new in the Human Interface Guidelines.
NEWS
Catch up on the latest updates
- App Store holiday schedule: We'll remain open throughout the holiday season and look forward to accepting your submissions. However, reviews may take a bit longer to complete from December 22 to 27.
- Sandbox improvements: Now you can change a test account's storefront, adjust subscription renewal rates, clear purchase history, simulate interrupted purchase flows directly on iPhone or iPad, and test Family Sharing.
- New software releases: Build your apps using the latest developer tools and test them on this week's OS releases. Download Xcode 15.1 RC, and the RC versions of iOS 17.2, iPadOS 17.2, macOS 14.2, tvOS 17.2, and watchOS 10.2.
Want to get Hello Developer in your inbox? Make sure you've opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We'd love to hear from you. If you have suggestions for our activities or stories, please let us know.
Q&A: Spatial design for visionOS
Spatial computing offers unique opportunities and challenges when designing apps and games. At WWDC23, the Apple design team hosted a wide-ranging Q&A to help developers explore designing for visionOS. Here are some highlights from that conversation, including insights on the spectrum of immersion, key moments, and sound design.
What's the best way to make a great first impression on this platform?
While it depends on your app, of course, starting in a window is a great way to introduce people to your app and let them control the amount of immersion. We generally recommend not placing people into a fully immersive experience right away; it's better to make sure they're oriented in your app before transporting them somewhere else.
What should I consider when bringing an existing iPadOS or iOS app to visionOS?
Think about a key moment where your app would really shine spatially. For example, in the Photos app for visionOS, opening a panoramic photo makes the image wrap around your field of view. Ask yourself what that potential key moment, an experience that isn't bound by a screen, is for your app.
From a more tactical perspective, consider how your UI will need to be optimized for visionOS. To learn more, check out "Design for spatial user interfaces".
Can you say a bit more about what you mean by a "key moment"?
A key moment is a feature or interaction that takes advantage of the unique capabilities of visionOS. (Think of it as a spatial or immersive highlight in your app.) For instance, if you're creating a writing app, your key moment might be a focus mode in which you immerse someone more fully in an environment or a Spatial Audio soundscape to get them in the creative zone. That's just not possible on a screen-based device.
I often use a grid system when designing for iOS and macOS. Does that still apply here?
Definitely! The grid can be very useful for designing windows, and point sizes translate directly between platforms. Things can get more complex when you're designing elements in 3D, like having nearby controls for a faraway element. To learn more, check out "Principles of spatial design."
What's the best way to test Apple Vision Pro experiences without the device?
You can use the visionOS simulator in Xcode to recreate system gestures, like pinch, drag, tap, and zoom.
What's the easiest way to make my spatial computing design look polished?
As a starting point, we recommend using the system-provided UI components. Think about hover shapes, how every element appears by default, and how they change when people look directly at them. When building custom components or larger elements like 3D objects, you'll also need to customize your hover effects.
What interaction or ergonomic design considerations should I keep in mind when designing for visionOS?
Comfort should guide experiences. We recommend keeping your main content in the field of view, so people don't need to move their neck and body too much. The more centered the content is in the field of view, the more comfortable it is for the eyes. It's also important to consider how you use input. Make sure you support system gestures in your app so people have the option to interact with content indirectly (using their eyes to focus an element and hand gestures, like a pinch, to select). For more on design considerations, check out "Design considerations for vision and motion."
Are there design philosophies for fully immersive experiences? Should the content wrap behind the person's head, above them, and below them?
Content can be placed anywhere, but we recommend providing only the amount of immersion needed. Apps can create great immersive experiences without taking over people's entire surroundings. To learn more, check out the Human Interface Guidelines.
Human Interface Guidelines: Immersive experiences
Are there guidelines for creating an environment for a fully immersive experience?
First, your environment should have a ground plane under the feet that aligns with the real world. As you design the specifics of your environment, focus on key details that will create immersion. For example, you don't need to render all the details of a real theater to convey the feeling of being in one. You can also use subtle motion to help bring an environment to life, like the gentle movement of clouds in the Mount Hood environment.
What else should I consider when designing for spatial computing?
Sound design comes to mind. When designing for other Apple platforms, you may not have placed as much emphasis on creating audio for your interfaces because people often mute sounds on their devices (or it's just not desirable for your current experience). With Apple Vision Pro, sound is crucial to creating a compelling experience.
People are adept at understanding their surroundings through sound, and you can use sound in your visionOS app or game to help people better understand and interact with elements around them. When someone presses a button, for example, an audio cue helps them recognize and confirm their actions. You can position sound spatially in visionOS so that audio comes directly from the item a person interacts with, and the system can use their surroundings to give it the appropriate reverberation and texture. You can even create spatial soundscapes for scenes to make them feel more lifelike and immersive.
For more on designing sound for visionOS, check out "Explore immersive sound design."
Learn more
For even more on designing for visionOS, check out more videos, the Human Interface Guidelines, and the Apple Developer website.
- Develop your first immersive app
- Get started with building apps for spatial computing
- Build great games for spatial computing
"We're trying to drive change": Meet three App Store Award-winning teams
Every year, the App Store Awards celebrate exceptional apps that improve people's lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact.
This year's winners were drawn from a list of 40 finalists that included everything from flight trackers to retro games to workout planners to meditative puzzles. In addition to exhibiting an incredible variety of approaches, styles, and techniques, these winners shared a thoughtful grasp and mastery of Apple tools and technologies.
Meet the winners and finalists of the 2023 App Store Awards
For the team behind the hidden-object game Finding Hannah, their win for Cultural Impact is especially meaningful. "We're trying to drive change on the design level by bringing more personal stories to a mainstream audience," says Franziska Zeiner, cofounder and managing director of the Fein Games studio, from her Berlin office. "Finding Hannah is a story that crosses three generations, and each faces the question: How truly free are we as women?"
Finding Hannah's story is driven by quiet, meaningful interactions between the main character, her mother, and her grandmother.
The Hannah of Finding Hannah is a 39-year-old Berlin resident trying to navigate a career, relationships (including with her best friend/ex, Emma), and the meaning of true happiness. Players complete a series of found-object puzzles that move along the backstory of Hannah's mother and grandmother to add a more personal touch to the game.
We're trying to drive change on the design level by bringing more personal stories to a mainstream audience.
Franziska Zeiner, Fein Games co-founder and managing director
To design the art for the game's different time periods, the team tried a different approach. "We wanted an art style that was something you'd see more on social media than in games," says Zeiner. "The idea was to try to reach people who weren't gamers yet, and we thought we'd most likely be able to do that if we found a style that hadn't been seen in games before. And I do think that added a new perspective, and maybe helped us stand out a little bit."
Learn more about Finding Hannah
Download Finding Hannah from the App Store
Pixelmator, the team behind Mac App of the Year winner Photomator, is no stranger to awards consideration, having received multiple Apple Design Awards in addition to their 2023 App Store Award. The latter is especially meaningful for the Lithuania-based team. "We're still a Mac-first company," says Simonas Bastys, lead developer of the Pixelmator team. "For what we do, Mac adds so many benefits to the user experience."
Smart Deband is just one of Photomator's many powerful features on Mac.
To start adding Photomator to their portfolio of Mac apps back in 2020, Bastys and his team of engineers decided against porting over their UIKit and AppKit code. Instead, they set out to build Photomator specifically for Mac with SwiftUI. "We had a lot of experience with AppKit," Bastys says, "but we chose to transition to SwiftUI to align with cutting-edge, future-proof technologies."
The team zeroed in on maximizing performance, assuming that people would need to navigate and manipulate large libraries. They also integrated a wealth of powerful editing tools, such as repairing, debanding, batch editing, and much more. Deciding what to work on, and what to prioritize, is a constant source of discussion. "We work on a lot of ideas in parallel," Bastys says, "and what we prioritize comes up very naturally, based on what's ready for shipment and what new technology might be coming." This year, that meant a focus on HDR.
We had a lot of experience with AppKit, but we wanted to create with native Mac technologies.
Simonas Bastys, lead developer of the Pixelmator team
How do Bastys and the Pixelmator team keep growing after so long? "This is the most exciting field in computer science to me," says Bastys. "There's so much to learn. I'm only now starting to even understand the depth of human vision and computer image processing. It's a continuous challenge. But I see endless possibilities to make Photomator better for creators."
Download Photomator from the Mac App Store
To create the Cultural Impact winner Unpacking, the Australian duo of creative director Wren Brier and technical director Tim Dawson drew on more than a decade of development experience. Their game, part zen puzzle, part life story, follows a woman through the chapters of her life as she moves from childhood bedroom to first apartment and beyond. Players solve puzzles by placing objects around each new dwelling while learning more about her history with each new level, something Brier says is akin to a detective story.
"You have this series of places, and you're opening these hints, and you're piecing together who this person is," she says from the pair's home in Brisbane.
Brier and Dawson are partners who got the idea for Unpacking from, where else, one of their own early moves. "There was something gamelike about the idea of finishing one box to unlock the one underneath," Brier says. "You're completing tasks, placing items together on shelves and in drawers. Tim and I started to brainstorm the game right away."
Unpacking has no visible characters and no dialogue. Its emotionally rich story is told entirely through objects in boxes.
While the idea was technically interesting, says Dawson, the pair was especially drawn to the idea of unpacking as a storytelling vehicle. "This is a really weird example," laughs Dawson, "but there's a spatula in the game. That's a pretty normal household item. But what does it look like? Is it cheap plastic, something that maybe this person got quickly? Is it damaged, like they've been holding onto it for a while? Is it one of those fancy brands with a rubberized handle? All of that starts painting a picture. It becomes this really intimate way of knowing a character."
There was something game-like about the idea of finishing one box to unlock the one underneath.
Wren Brier, Unpacking creative director
Those kinds of discussions, spatula-based and otherwise, led to a game that includes novel uses of technology, like the haptic feedback you get when you shake a piggy bank or board game. But its diverse, inclusive story is the reason behind its App Store Award nod for Cultural Impact. Brier and Dawson say players of all ages and backgrounds have shared their love of the game, drawn by the universal experience of moving yourself, your belongings, and your life into a new home. "One guy even sent us a picture of his bouldering shoes and told us they were identical to the ones in the game," laughs Brier. "He said, 'I have never felt so seen.'"
With Ancient Board Game Collection, Klemens Strasser goes back in time
Klemens Strasser will be the first to tell you that prior to launching his Ancient Board Game Collection, he wasn't especially skilled at Hnefatafl. "Everybody knows chess and everybody knows backgammon," says the indie developer from his home office in Austria, "but, yeah, I didn't really know that one."
Today, Strasser runs what may well be the hottest Hnefatafl game in town. Ancient Board Game Collection, an Apple Design Award finalist for Inclusivity, comprises nine games that reach back not years or decades but centuries; Hnefatafl (or Viking chess) is said to be nearly 1,700 years old, while the Italian game Latrunculi is closer to 2,000. And while games like Konane, Gomoku, and Five Field Kono might not be household names, Strasser's collection gives them fresh life through splashy visuals, a Renaissance faire soundtrack, efficient onboarding, and even a bit of history.
At roughly 1,700 years old, Hnefatafl is one of the more ancient titles in Klemens Strasser's Ancient Board Game Collection.
Strasser is a veteran of Flexibits (Fantastical, Cardhop) and the developer behind such titles as Letter Rooms, Subwords, and Elementary Minute (for which he won a student Apple Design Award in 2015). But while he was familiar with Nine Men's Morris, a game popular in Austria he'd play with his grandma, he wasn't exactly well versed in third-century Viking pastimes until a colleague brought Hnefatafl to his attention three years ago. "It was so different than the traditional symmetric board games I knew," he says. "I really fell in love with it."
Less appealing were mobile versions of Hnefatafl, which Strasser found lacking. "The digital versions of many board games have a certain design," he says. "It's usually pretty skeuomorphic, with a lot of wood and felt and stuff like that. That just didn't make me happy. And I thought, 'Well, if I can't find one I like, I'll build it.'"
I found a book on ancient board games by an Oxford professor and it threw me right down a rabbit hole.
Klemens Strasser
Using SpriteKit, Strasser began mocking up an iOS Hnefatafl prototype in his downtime. A programmer by trade ("I'm not very good at drawing stuff," he demurs), Strasser took pains to keep his side project as simple as possible. "I always start with minimalistic designs for my games and apps, but these are games you play with some stones and maybe a piece of paper," he laughs. "I figured I could build that myself."
His Hnefatafl explorations came surprisingly fast, enough so that he started wondering what other long-lost games might be out there. "I found a book on ancient board games by an Oxford professor and it threw me right down a rabbit hole," Strasser laughs. "I kept saying, 'Oh, that's an interesting game, and that's also an interesting game, and that's another interesting game.'" Before he knew it, his simple Hnefatafl mockup had become a buffet of games. "And I still have a list of like 20 games I'd still like to digitize," he says.
Italian designer Carmine Acierno brought a mosaic-inspired design to Nine Men's Morris.
For the initial designs of his first few games, Strasser tried to maintain the simple style of his Hnefatafl prototype. "But I realized that I couldn't really represent the culture and history behind each game in that way," he says, "so I hired people who live where the games are from."
That's where Ancient Board Game Collection really took off. Strasser began reaching out to artists from each ancient game's home region, and the responses came fast. Out went the minimalist version of Ancient Board Game Collection; in came a richer take, powered by a variety of cultures and design styles. For Hnefatafl, Strasser made a fortuitous connection with Swedish designer Albina Lind. "I sent her a few images of like Vikings and runestones, and in two hours she came up with a design that was better than anything I could have imagined," he says. "If I hadn't run into her, I might not have finished the project. But it was so perfect that I had to continue."
Stockholm-based artist Albina Lind leapt right into designing Hnefatafl. "I instantly thought, 'Well, this is my cup of tea,'" she says.
Lind was a wise choice. The Stockholm-based freelance artist had nearly a decade of experience designing games, including her own Norse-themed adventure, Dragonberg. "I instantly thought, 'Well, this is my cup of tea,'" Lind says. Her first concept was relatively realistic, all dark wood and stone textures, before she settled on a more relaxed, animation-inspired style. "Sometimes going unreal, going cartoony, is even more work than being realistic," she says with a laugh. Lind went on to design two additional ancient games: Dablot, the exact origins of which aren't known but which first turned up in 1892, and Halatafl, a 14th-century game of Scandinavian origin.
Work arrived from around the globe. Italian designer Carmine Acierno contributed a mosaic-inspired version of Nine Men's Morris; Honolulu-based designer Anna Fujishige brought a traditional Hawaiian flavor to Konane. And while the approach succeeded in preserving more of each game's authentic heritage, it did mean iterating with numerous people over numerous emails. One example: Tokyo-based designer Yosuke Ando pitched changing Strasser's initial designs for the Japanese game Gomoku altogether. "Klemens approached me initially with the idea of the game design to be inspired by ukiyo-e (paintings) and musha-e (woodblock prints of warriors)," Ando says. "Eventually, we decided to focus on samurai warrior armor from musha-e, deconstructing it, and simplifying these elements into the game UI."
Honolulu-based designer Anna Fujishige brought a traditional Hawaiian flavor to Konane (at left), while Tokyo-based designer Yosuke Ando's ideas for Gomoku were inspired by samurai warrior armor.
While the design process continued, Strasser worked on an onboarding strategy, times nine. As you might suspect, it can be tricky to explain the rules and subtleties of 500-year-old games from lost civilizations, and Strasser's initial approach, walkthroughs and puzzles designed to teach each game step by step, quickly proved unwieldy. So he went in the other direction, concentrating on writing "very simple, very understandable" rules with short gameplay animations that can be accessed at any time. "I picked games that could be explained in like three or four sentences," he says. "And I wanted to make sure it was all accessible via VoiceOver."
Strasser designed every part of Ancient Board Game Collection with accessibility in mind.
In fact, accessibility remained a priority throughout the entire project. (He wrote his master's thesis on accessibility in Unity games.) As an Apple Design Award finalist for Inclusivity, Ancient Board Game Collection shines with best-in-class VoiceOver adoption, as well as support for Reduce Motion, Dynamic Type, and high-contrast game boards. "It's at least some contribution to making everything better for everyone," he says.
I picked games that could be explained in like three or four sentences. And I wanted to make sure it was all accessible via VoiceOver.
Klemens Strasser
Ancient Board Game Collection truly is for everyone, and it's hardly hyperbole to call it a novel way to introduce games like Hnefatafl to a whole new generation of players. "Most people," he says, "are just surprised that they've never heard of these games."
Learn more about Ancient Board Game Collection
Download Ancient Board Game Collection from the App Store
Behind the Design is a series that explores design practices and philosophies from each of the winners and finalists of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
25 hours of Xcode Cloud now included with the Apple Developer Program
Xcode Cloud, the continuous integration and delivery service built into Xcode, accelerates the development and delivery of high-quality apps. It brings together cloud-based tools that help you build apps, run automated tests in parallel, deliver apps to testers, and view and manage user feedback.
We're pleased to announce that as of January 2024, all Apple Developer Program memberships will include 25 compute hours per month on Xcode Cloud as a standard, with no additional cost. If you're already subscribed to Xcode Cloud for free, no additional action is required on your part. And if you haven't tried Xcode Cloud yet, now is the perfect time to start building your app for free in just a few minutes.
Privacy updates for App Store submissions
Third-party SDK privacy manifest and signatures. Third-party software development kits (SDKs) can provide great functionality for apps; they can also have the potential to impact user privacy in ways that aren't obvious to developers and users. As a reminder, when you use a third-party SDK with your app, you are responsible for all the code the SDK includes in your app, and need to be aware of its data collection and use practices.
At WWDC23, we introduced new privacy manifests and signatures for SDKs to help app developers better understand how third-party SDKs use data, secure software dependencies, and provide additional privacy protection for users. Starting in spring 2024, if your new app or app update submission adds a third-party SDK that is commonly used in apps on the App Store, you'll need to include the privacy manifest for the SDK. Signatures are also required when the SDK is used as a binary dependency. This functionality is a step forward for all apps, and we encourage all SDKs to adopt it to better support the apps that depend on them.
Learn more and view list of commonly-used third-party SDKs
New use cases for APIs that require reasons. When you upload a new app or app update to App Store Connect that uses an API (including from third-party SDKs) that requires a reason, you'll receive a notice if you haven't provided an approved reason in your app's privacy manifest. Based on the feedback we received from developers, the list of approved reasons has been expanded to include additional use cases. If you have a use case that directly benefits users that isn't covered by an existing approved reason, submit a request for a new reason to be added.
Starting in spring 2024, in order to upload your new app or app update to App Store Connect, you'll be required to include an approved reason in the app's privacy manifest which accurately reflects how your app uses the API.
New design and technology consultations now available
Have questions on designing your app or implementing a technology? We're here to help you find answers, no matter where you are in your development journey. One-on-one consultations with Apple experts in December, and newly published dates in January, are available now.
We'll have lots more consultations and other activities in store for 2024: online, in person, and in multiple languages.
Get your apps ready for the holidays
The busiest season on the App Store is almost here! Make sure your apps and games are up to date and ready in advance of the upcoming holidays. We'll remain open throughout the season and look forward to accepting your submissions. On average, 90% of submissions are reviewed in less than 24 hours. However, reviews may take a bit longer to complete from December 22 to 27.
App Store Award winners announced
Join us in celebrating the work of these outstanding developers from around the world!
App Store Award finalists announced
Every year, the App Store celebrates exceptional apps that improve people's lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact. This year we're proud to recognize nearly 40 outstanding finalists. Winners will be announced in the coming weeks.
Optimize your game for Apple platforms
In this series of videos, you can learn how to level up your pro app or game by harnessing the speed and power of Apple platforms. We'll discover GPU advancements, explore new Metal profiling tools for M3 and A17 Pro, and share performance best practices for Metal shaders.
- Explore GPU advancements in M3 and A17 Pro
- Discover new Metal profiling tools for M3 and A17 Pro
- Learn performance best practices for Metal shaders
New to developing games for Apple platforms? Familiarize yourself with the tools and technologies you need to get started.
PTC is uniting the makers
APPLE VISION PRO APPS FOR ENTERPRISE
PTC's CAD products have been at the forefront of the engineering industry for more than three decades. And the company's AR/VR CTO, Stephen Prideaux-Ghee, has too. "I've been doing VR for 30 years, and I've never had this kind of experience before," he says. "I almost get so blasé about VR. But when I had [Apple Vision Pro] on, walking around digital objects and interacting with others in real time, it's one of those things that makes you stop in your tracks."
Prideaux-Ghee says Apple Vision Pro offers PTC an opportunity to bring together components of the engineering and manufacturing process like never before. "Our customers either make stuff, or they make the machines that help somebody else make stuff," says Prideaux-Ghee. And that stuff can be anything from chairs to boats to spaceships. "I can almost guarantee that the chair you're sitting on is made by one of our customers," he says.
As AR/VR CTO (which he says means "a fancy title for somebody who comes up with crazy ideas and has a reasonably good chance of implementing them"), Prideaux-Ghee describes PTC's role as the connective tissue between the multiple threads of production. "When you've got a big, international production process, it's not always easy for the people involved to talk to each other. Our thought was: 'Hey, we're in the middle of this, so let's come up with a simple mechanism that allows everyone to do so.'"
I've been doing VR for 30 years, and I've never had this kind of experience before.
Stephen Prideaux-Ghee, AR/VR CTO of PTC
For PTC, it's all about communication and collaboration. "You can be a single user and get a lot of value from our app," says Prideaux-Ghee, "but it really starts when you have multiple people collaborating, either in the same room or over FaceTime and SharePlay." He speaks from experience; PTC has tested its app with everyone in the same space, and spread out across different countries.
"It enables some really interesting use cases, especially with passthrough," says Prideaux-Ghee. "You can use natural human interactions with a remote device."
Development is going fast. In recent weeks, PTC completed a prototype in which changes made in their iPad CAD software are immediately reflected in Apple Vision Pro. "Before, we weren't able to drive from the CAD software," he explains. "Now, one person can run our CAD software pretty much unmodified and another can see changes instantly in 3D, at full scale. It's really quite magical."
Read more
Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro, and realizing ideas that were never before possible.
JigSpace is in the driver's seat
APPLE VISION PRO APPS FOR ENTERPRISE
It's one of the most memorable images from JigSpace's early Apple Vision Pro explorations: a life-size Alfa Romeo C43 Formula 1 car, dark cherry red, built to scale, reflecting light from all around, and parked right in the room. The camera pans back over the car's front wings; a graceful animation shows airflow over the wings and body.
Numa Bertron, cofounder and chief technology officer for JigSpace, the creative and collaborative company that partnered with Alfa Romeo for the model, has been in the driver's seat for the project from day one and still wasn't quite prepared to see the car in the spatial environment. "The first thing everyone wanted to do was get in," he says. "Everyone was stepping over the side to get in, even though you can just, you know, walk through."
The F1 car is just one component of JigSpace's grand plans for visionOS. The company is leaning on the new platform to create avenues of creativity and collaboration never before possible.
Bertron brings up one of JigSpace's most notable "Jigs" (the company term for spatial presentations): an incredibly detailed model of a jet engine. "On iPhone, it's an AR model that expands and looks awesome, but it's still on a screen," he explains. On Apple Vision Pro, that engine becomes a life-size piece of roaring, spinning machinery, one that people can walk around, poke through, and explore in previously unimaginable detail.
"One of our guys is a senior 3D artist," says Bertron, "and the first time he saw one of his models in space at scale, and walked around it with his hands free, he actually cried."
We made that F1 Jig with tools everyone can use.
Numa Bertron, JigSpace cofounder and chief technology officer
Getting there required some background learning. Prior to developing for visionOS, Bertron had no experience with SwiftUI. "We'd never gone into Xcode, so we started learning SwiftUI and RealityKit. Honestly, we expected some pain. But since everything is preset, we had really nice rounded corners, blur effects, and smooth scrolling right off the bat."
JigSpace is designing a "full-on collaboration platform," says Bertron.
For people who've used JigSpace on iOS, the visionOS version will look familiar but feel quite different. "We asked ourselves: What's the appropriate size for an object in front of you?" asks Bertron. "What's comfortable? Will that model be on the table or on the floor? Spatial computing introduces so many more opportunities, and more decisions."
In the case of the F1 example, it also offers a chance to level up visually. "For objects that big, we'd never been able to achieve this level of fidelity on smaller devices, so we always had to compromise," says Bertron. In visionOS, they were free to keep adding. "We'd look at a prototype and say, 'Well, this still runs, so let's double the size of the textures and add more screws and more effects!'" (It's not just about functionality, but fun as well. You can remove a piece of the car, like a full-sized tire, and throw it backwards over your head.)
The incredible visual achievement is matched by new powers of collaboration. "If I point at the tire, the other person sees me, no matter where they are," says Bertron. "I can grab the wheel and give it to them. I can circle something we need to fix, I can leave notes or record audio. It's a full-on collaboration platform." And it's also for everyone, not just F1 drivers and aerospace engineers. "We made that F1 Jig with tools everyone can use."
Download JigSpace from the App Store
Read more
Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro, and realizing ideas that were never before possible.
PTC is uniting the makers
The "sweet, creative" world of Kimono Cats
Games simply don't get much cuter than Kimono Cats, a casual cartoon adventure about two cats on a date (awww) that creator Greg Johnson made as a present for his wife. "I wanted to make a game she and I could play together," says the Maui-based indie developer, "and I wanted it to be sweet, creative, and romantic."
Kimono Cats is all three, and it's also spectacularly easy to play and navigate. This Apple Design Award finalist for Interaction in games is set in a Japanese festival full of charming mini-games, darts, fishing, and the like, that are designed for maximum simplicity and casual fun. Players swipe up to throw darts at balloons that contain activities, rewards, and sometimes setbacks that threaten to briefly derail the date. Interaction gestures (like scooping fish) are simple and rewarding, and the gameplay variation and side activities (like building a village for your feline duo) fit right in.
"I wanted something sweet, creative, and romantic," says Kimono Cats developer Greg Johnson.
"I'm a huge fan of Hayao Miyazaki and that kind of heartfelt, slower-paced style," says Johnson. "What you see in Kimono Cats is a warmth and appreciation for Japanese culture."
You also see a game that's a product of its environment. Johnson's been creating games since 1983 and is responsible for titles like Starfight, ToeJam and Earl, Doki-Doki Universe, and many more. His wife, Sirena, is a builder of model houses, miniature worlds not unlike the village in Kimono Cats. And the game's concept was a reaction to the early days of COVID-19 lockdowns. "When we started building this in 2020, everybody was under so much weight and pressure," he says. "We felt like this was a good antidote."
Early Kimono Cats sketches show how the characters' cute look was established early in the design process.
To start creating the game, Johnson turned to artist and longtime collaborator Ferry Halim, as well as Tanta Vorawatanakul and Ferrari Duanghathai, a pair of developers who happen to be married. "Tanta and Ferrari would provide these charming little characters, and Ferry would come in to add animations, like moving their eyes," says Johnson. "We iterated a lot on animating the bubbles: how fast they were moving, how many there were, how they were obscured. That was the product of a lot of testing and listening all throughout the development process."
When we started with this in 2020, everybody was under so much weight and pressure. We felt like this was a good antidote.
Greg Johnson, Kimono Cats
Johnson notes that players can select characters without gender distinction, a detail that he and the Kimono Cats team prioritized from day one. "Whenever any companion kisses the player character on the cheek, a subtle rainbow will appear in the sky over their heads," Johnson says. "This allows the gender of the cat characters to be open to interpretation by the users."
Kimono Cats was designed with the simple goal of bringing smiles. "The core concept of throwing darts at bubbles isn't an earth-shaking idea by any stretch," says Johnson, "but it was a way to interact with the storytelling that I hadn't seen before, and the festival setting felt like a natural match."
Find Kimono Cats on Apple Arcade
Behind the Design is a series that explores design practices and philosophies from each of the winners and finalists of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Spotlight on: Apple Vision Pro apps for enterprise
Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro, and realizing ideas that were never before possible. We caught up with two of those companies, JigSpace and PTC, to find out how they're approaching the new world of visionOS.
- JigSpace is in the driver's seat
- PTC is uniting the makers
Reimagine your enterprise apps on Apple Vision Pro
Discover the languages, tools, and frameworks you'll need to build and test your apps in visionOS. Explore videos and resources that showcase productivity and collaboration, simulation and training, and guided work. And dive into workflows for creating or converting existing media, incorporating on-device and remote assets into your app, and much more.
Apple Vision Pro at work
- Keynote
- Keynote (ASL)
- Platforms State of the Union
- Platforms State of the Union (ASL)
Design for Apple Vision Pro
WWDC sessions
- Design for spatial input
- Design for spatial user interfaces
- Principles of spatial design
- Design considerations for vision and motion
- Explore immersive sound design
Sample code, articles, documentation, and resources
Developer paths to Apple Vision Pro
WWDC sessions
- Go beyond the window with SwiftUI
- Meet SwiftUI for spatial computing
- Meet ARKit for spatial computing
- What's new in SwiftUI
- Discover Observation in SwiftUI
- Enhance your spatial computing app with RealityKit
- Build spatial experiences with RealityKit
- Evolve your ARKit app for spatial experiences
- Create immersive Unity apps
- Bring your Unity VR app to a fully immersive space
- Meet Safari for spatial computing
- Rediscover Safari developer features
- Design for spatial input
- Explore the USD ecosystem
- Explore USD tools and rendering
Sample code, articles, documentation, and resources
Unity â XR Interaction Toolkit package
Unity â How Unity builds applications for Apple platforms
three.js â webGL and WebXR library
babylon.js â webGL and WebXR library
PlayCanvas â webGL and WebXR library
AOUSD â Alliance for OpenUSD
Immersiveweb â WebXR Device API
WebKit.org â Bug tracking for WebKit open source project
Frameworks to explore
WWDC sessions
- Discover streamlined location updates
- Meet Core Location Monitor
- Meet MapKit for SwiftUI
- What's new in MapKit
- Build spatial SharePlay experiences
- Share files with SharePlay
- Design spatial SharePlay experiences
- Discover Quick Look for spatial computing
- Create 3D models for Quick Look spatial experiences
- Explore pie charts and interactivity in Swift Charts
- Elevate your windowed app for spatial computing
- Create a great spatial playback experience
- Deliver video content for spatial experiences
Sample code, articles, documentation, and resources
Placing content on detected planes
Incorporating real-world surroundings in an immersive experience
Tracking specific points in world space
Tracking preregistered images in 3D space
Explore a location with a highly detailed map and Look Around
Drawing content in a group session
Supporting Coordinated Media Playback
Adopting live updates in Core Location
Monitoring location changes with Core Location
Access enterprise data and assets
WWDC sessions
- Meet Swift OpenAPI Generator
- Advances in Networking, Part 1
- Advances in App Background Execution
- The Push Notifications primer
- Power down: Improve battery consumption
- Build robust and resumable file transfers
- Efficiency awaits: Background tasks in SwiftUI
- Use async/await with URLSession
- Meet SwiftData
- Explore the USD ecosystem
- What's new in App Store server APIs
Sample code, articles, documentation, and resources
Announcing the Swift Student Challenge 2024
Apple is proud to support and uplift the next generation of student developers, creators, and entrepreneurs. The Swift Student Challenge has given thousands of students the opportunity to showcase their creativity and coding capabilities through app playgrounds, and build real-world skills that they can take into their careers and beyond. From connecting their peers to mental health resources to identifying ways to support sustainability efforts on campus, Swift Student Challenge participants use their creativity to develop apps that solve problems they're passionate about.
We're releasing new coding resources, working with community partners, and announcing the Challenge earlier than in previous years so students can dive deep into Swift and the development process, and educators can get a head start in supporting them.
Applications will open in February 2024 for three weeks.
New for 2024, out of 350 overall winners, we'll recognize 50 Distinguished Winners for their outstanding submissions and invite them to join us at Apple in Cupertino for three incredible days next summer.
Over 30 new developer activities now available
Ready to level up your app or game? Join us around the world for a new set of developer labs, consultations, sessions, and workshops, hosted in person and online throughout November and December.
You can explore:
- App Store activities: Learn about discovery, engagement, in-app events, custom product pages, subscription best practices, and much more.
- Apple Vision Pro developer labs: Apply to attend a lab in Cupertino, London, Munich, New York City, Shanghai, Singapore, Sydney, or Tokyo.
- Apple Vision Pro activities: Learn to design and build an entirely new universe of apps and games for visionOS.
- Design and technology consultations: Sign up for one-on-one guidance on app design, technology implementation, and more.
Discover activities in multiple time zones and languages.
Tax updates for apps, in-app purchases, and subscriptions
The App Store's commerce and payments system was built to enable you to conveniently set up and sell your products and services on a global scale in 44 currencies across 175 storefronts. Apple administers tax on behalf of developers in over 70 countries and regions and provides you with the ability to assign tax categories to your apps and in-app purchases.
Periodically, we make updates to rates, categories, and agreements to accommodate new regulations and rate changes in certain regions. As of today, the following updates have been made in App Store Connect.
Tax rates
Your proceeds from the sale of eligible apps and in-app purchases (including auto-renewable subscriptions) have been increased to reflect the following reduced value-added tax (VAT) rates. Prices on the App Store haven't changed.
- Austria: Reduced VAT rates for certain apps in the Video tax category
- Cyprus: Reduced VAT rate of 3% for certain apps in the following tax categories: Books, News Publications, Audiobooks, Magazines and other periodicals
- Vietnam: Eliminated VAT for certain apps in the following tax categories: Books, News Publications, Magazines, and other periodicals
- New Boosting category: Apps and/or in-app purchases that offer resources to provide exposure, visibility, or engagement to enhance the prominence and reach of specific content that's experienced or consumed in app (such as videos, sales of "boosts" in social media apps, listings, and/or other forms of user-generated content).
- New attribute for books: Textbook or other educational publication used for teaching and studying between ages 5 to 18
- New attributes for videos: Exclusively features live TV broadcasting and/or linear programming. Public TV broadcasting, excluding shopping or infomercial channels.
If any of these categories or attributes are relevant to your apps or in-app purchases, you can review and update your selections in the Pricing and Availability section of My Apps.
Learn about setting tax categories
Paid Applications Agreement
- Exhibit C Section 1.2.2: Updated language to clarify the goods and services tax (GST) requirements for developers on the Australia storefront.
Get ready with the latest beta releases
The beta versions of iOS 17.2, iPadOS 17.2, macOS 14.2, tvOS 17.2, and watchOS 10.2 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.1 beta.
To check if a known issue from a previous beta release has been resolved or if there's a workaround, review the latest release notes. Please let us know if you encounter an issue or have other comments. We value your feedback, as it helps us address issues, refine features, and update documentation.
TestFlight makes it even simpler to manage testers
TestFlight provides an easy way to get feedback on beta versions of your apps, so you can publish on the App Store with confidence. Now, improved controls in App Store Connect let you better evaluate tester engagement and manage participation to help you get the most out of beta testing. Sort testers by status and engagement metrics (like sessions, crashes, and feedback), and remove inactive testers who haven't engaged. You can also filter by device and OS, and even select relevant testers to add to a new group.
Scary fast.
Watch the October 30 event at apple.com.
New delivery metrics now available in the Push Notifications Console
The Push Notifications Console now includes metrics for notifications sent in production through the Apple Push Notification service (APNs). With the console's intuitive interface, you'll get an aggregated view of delivery statuses and insights into various statistics for notifications, including a detailed breakdown based on push type and priority.
Introduced at WWDC23, the Push Notifications Console makes it easy to send test notifications to Apple devices through APNs.
Apple Vision Pro developer labs expand to New York City and Sydney
We're thrilled with the excitement and enthusiasm from developers around the world at the Apple Vision Pro developer labs, and we're pleased to announce new labs in New York City and Sydney. Join us to test directly on the device and connect with Apple experts for help with taking your visionOS, iPadOS, and iOS apps even further on this exciting new platform. Labs also take place in Cupertino, London, Munich, Shanghai, Singapore, and Tokyo.
Learn about other ways to work with Apple to prepare for visionOS.
Recent content on Mobile A11y
iOS Accessibility Values
For iOS, accessibility values are one of the building blocks of how accessibility works on the platform, along with traits, labels, hints, and showing/hiding elements. If you're familiar with WCAG or web accessibility, accessibility values are the value part of WCAG 4.1.2: Name, Role, Value. Values Not every element in your view will have a value - in fact, most won't. Any element that "contains" data not already included in the element's label requires an accessibility value to be set.
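To make the label/value split concrete, here is a minimal UIKit sketch (the StarRatingView type and its strings are illustrative, not taken from the post). A custom view that draws a rating graphically exposes that data through accessibilityValue so VoiceOver can announce it alongside the label:

```swift
import UIKit

/// Hypothetical custom view that renders a star rating graphically.
final class StarRatingView: UIView {
    var rating: Int = 0 {
        didSet {
            // The rating isn't in the label, so expose it as the value.
            accessibilityValue = "\(rating) out of 5 stars"
        }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        isAccessibilityElement = true
        accessibilityLabel = "Rating" // the "name"; the value carries the data
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}
```

With this in place, VoiceOver would read something like "Rating, 3 out of 5 stars".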
iOS UIKit Accessibility traits
Accessibility traits on iOS are the system by which assistive technologies know how to present your interface to your users. The exact experience will vary between assistive technologies: in some cases they may change the intonation of what VoiceOver reads or add additional options for navigation; sometimes they will prevent an assistive technology from accessing the element, or change how the assistive tech functions. They are the "Role" part of the fundamental rule of making something accessible to screen readers - WCAG's 4.1.2: Name, Role, Value.
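As a rough illustration of assigning traits in UIKit (the labels and copy below are made up, not from the article):

```swift
import UIKit

let headingLabel = UILabel()
headingLabel.text = "Order history"
// Announce this label as a heading so VoiceOver users can jump
// between sections with the headings rotor.
headingLabel.accessibilityTraits = .header

let termsLabel = UILabel()
termsLabel.text = "Terms and conditions"
termsLabel.isUserInteractionEnabled = true
// Present the label as a link rather than plain static text.
termsLabel.accessibilityTraits = .link
```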
iOS Custom Accessibility Actions
When testing your app with VoiceOver or Switch Control, a common test is to ensure you can reach every interactive element on screen. If these assistive technologies can't focus all of your buttons, how will your customers be able to interact fully with your app? Except there are times when hiding buttons from your assistive technology users is the better choice. Consider an app with a table view that has many repeating interactive elements - this could be a social media app where "like, share, reply" and so on are repeated for each post.
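A sketch of that pattern, assuming a hypothetical PostCell with repeated like/reply/share buttons: the cell becomes a single element and re-exposes the buttons' behaviour as custom actions.

```swift
import UIKit

final class PostCell: UITableViewCell {
    func configureAccessibility() {
        // One element per post; the repeated buttons are folded into actions.
        isAccessibilityElement = true
        accessibilityCustomActions = [
            UIAccessibilityCustomAction(name: "Like") { _ in true },   // perform the like here
            UIAccessibilityCustomAction(name: "Reply") { _ in true },  // open the reply UI here
            UIAccessibilityCustomAction(name: "Share") { _ in true }   // present the share sheet here
        ]
    }
}
```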
Test Your App's Accessibility with Evinced
Disclosure: Evinced has paid for my time in writing this blog, and I have provided them feedback on the version of their tool reviewed and an early beta. I agreed to this because I believe in the product they are offering. Testing your app for accessibility is an essential part of making an accessible app; as with any part of the software you build, if you don't test it, how can you be sure it works?
How Do I Get My App an Accessibility Audit?
This is a common question I get asked - how do I go about arranging an accessibility audit for my app so I know where I can make improvements? If you're truly looking for an answer to that question then I have a few options for you below, but first, are you asking the right question? Accessibility Isn't About Box Ticking You can't make your app accessible by getting a report, fixing the findings, and accepting it as done.
Quick Win - Start UI Testing
I'll admit, adding UI testing to an app that currently doesn't have it included is probably stretching the definition of quick win, but the aim here isn't 100% coverage - not right away anyway. Start small and add to your test suite as you gain confidence. Even a small suite of crucial happy-path UI tests will help to ensure and persist accessibility in your app. And the more you get comfortable with UI tests the more accessible your apps will become, because an app that is easy to test is also great for accessibility.
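For a flavour of what a small happy-path test could look like (the app, element identifiers, and flow here are entirely hypothetical):

```swift
import XCTest

final class HappyPathTests: XCTestCase {
    func testSearchFlowIsReachable() {
        let app = XCUIApplication()
        app.launch()

        // Everything asserted on here must be reachable through the
        // accessibility tree, so the test doubles as a basic a11y check.
        let searchField = app.searchFields["Search"]
        XCTAssertTrue(searchField.waitForExistence(timeout: 5))
        searchField.tap()
        searchField.typeText("pizza")

        app.buttons["Search"].tap()
        XCTAssertTrue(app.cells.firstMatch.waitForExistence(timeout: 5))
    }
}
```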
Quick Win - Support Dark Mode
Many people don't realise dark mode is an accessibility feature. It's often just considered a nice-to-have, a cool extra feature that power users will love. But dark mode is also a valuable accessibility feature. Some types of visual impairment can make it painful to look at bright colours, or large blocks of white can wash out the black text. Some people with dyslexia or Irlen Syndrome can struggle to read black text on a white background.
Quick Win - Support Landscape
If you have a regulatory requirement to provide accessibility in your app (spoiler, you do) the chances are it will say you have a requirement to reach WCAG AA. While this is likely meaningless to anyone other than accessibility professionals, in short it means you are providing the minimum level of accessibility features required to make your app usable by the majority of people. This post is about one such requirement, the jazzily titled Success Criterion 1.
Quick Win - Image Descriptions
Images are a major part of our apps. They add meaning and interest, they give your app character and context. The adage is that a picture is worth a thousand words. But if you can't see the image clearly, how do you know what those words are? If you aren't providing image descriptions in your app, many of your users will be missing out on the experience you've crafted. The result can be an app that's missing that spark and character, or worse, an app that's just meaningless and unusable.
Quick Win - Text Contrast
How many shades of grey do you use in your app? OK, maybe that's a bit cruel towards designers (grey is a great colour), but the problem with grey is that it can be deceptively difficult to distinguish from a background. And this problem is not just limited to greys: lighter colours too can blend into the background. This effect can be heightened for people who have blurred or obscured vision, or one of many forms of colour blindness.
iOS 14: Custom Accessibility Content
Each year at WWDC Xcode Santa brings us exciting new APIs to play with, and this year our accessibility present is Customized Accessibility Content. This API flew under the radar a little; I'm told this is because it's so new there wasn't even time for inclusion at WWDC. But this new feature helps to solve a difficult question when designing a VoiceOver interface - where is the balance between too much and too little content.
Accessibility Review: Huh? - International languages
The Accessibility Review series uses real world apps to provide examples of common accessibility issues and provide tips on how to fix them. Each of the developers has kindly volunteered their app to be tested. Huh? is a dictionary and thesaurus app from Peter Yaacoub. Enter a word into the search bar then choose a dictionary service. Press search and the app will present your chosen service's entry for the term you entered.
Accessibility Review: Figure Case - Button Labels
The Accessibility Review series uses real world apps to provide examples of common accessibility issues and provide tips on how to fix them. Each of the developers has kindly volunteered their app to be tested. Figure Case is an app to help organise a tabletop miniature collection created by Simon Nickel. The app helps to track miniatures you own, and what state they currently find themselves in - unassembled, assembled, or painted.
Accessibility Review: Daily Dictionary - Screen changes
The Accessibility Review series uses real world apps to provide examples of common accessibility issues and provide tips on how to fix them. Each of the developers has kindly volunteered their app to be tested. Daily Dictionary is an app from Benjamin Mayo providing a new word every day with definitions and real-world uses designed to help increase your vocabulary. Assessing the app, I noticed Benjamin has made a design decision around presenting the app's settings.
iOS Attributed Accessibility Labels
Attributed accessibility labels are an incredible tool for making some next-level accessible experiences. They let you tell VoiceOver not just what to speak, but how to say it too. Using the accessibilityAttributedLabel property you can provide an NSAttributedString to VoiceOver, much the same way you would provide an NSAttributedString to a label's attributedText property to display a string with an underline or character colour for example. The difference here is that all of our attributes are instructions for VoiceOver.
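A minimal sketch of the idea (the string is arbitrary): attach a speech attribute to an NSAttributedString and hand it to accessibilityAttributedLabel.

```swift
import UIKit

let label = UILabel()
label.text = "SwiftUI"
// Ask VoiceOver to spell the word out letter by letter rather than
// guessing at a pronunciation.
label.accessibilityAttributedLabel = NSAttributedString(
    string: "SwiftUI",
    attributes: [.accessibilitySpeechSpellOut: true]
)
```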
Writing Great iOS Accessibility Labels
A good accessibility label lets your customer know exactly what a control does in as few words as possible, without having to rely on implied context. Don't Add the Element Type iOS already knows your button is a button and your image is an image, it does this using an accessibility trait. If you label your button as "Play button" your VoiceOver customers will hear "Play button. Button." Keep it Succinct Don't frustrate your customer by adding too much information to your labels.
When to use Accessibility Labels
There's a few circumstances when you'll want to set your own accessibility label, such as... An interactive element that you haven't given a text value to, such as an image button. An interactive element with a long visual label. An interactive element with a short visual label that takes context from your design. A control or view you have created yourself or built by combining elements. Images of text. Elements Without a text value Take the controls for a music player as an example.
iOS Accessibility Labels
This blog was inspired by Jeff Watkins' series of blogs on UIButton. UIButton is a fundamental part of building interfaces on iOS. So much so, that it probably doesn't get the love it deserves. But it's also really powerful and customisable when used correctly. Accessibility labels on iOS I feel are very similar. They're fundamental to how accessibility works on iOS, yet I think they suffer from a few PR issues.
A11y Box Android
A few months ago I shared a project I'd been working on for iOS exploring the accessibility API available on that platform. The Android accessibility API is equally large and full featured, and really deserves the same treatment. So here's A11y Box for Android. A11y Box for Android is an exploration of what is available in the Android accessibility API and how you can make use of it in your apps.
Mobile A11y Talk: Accessibility in SwiftUI
I was supposed to be attending the 2020 CSUN Assistive Technology conference to present a couple of talks; unfortunately, with COVID-19 starting to take hold at that time, I wasn't able to attend. In lieu of attending I decided to record one of the talks I was scheduled to present, on Accessibility in SwiftUI. SwiftUI is Apple's new paradigm for creating user interfaces on Apple platforms, and it has a bunch of new approaches that really help create more accessible experiences.
A11y Box iOS
iOS' UIAccessibility API is huge. I like to think I know it pretty well, but I'm always being surprised by discovering features I previously had no idea about. Like many things on iOS, the documentation for UIAccessibility is not always complete, even for parts of the API that have been around for years. In an attempt to help spread the knowledge of some of the awesome things UIAccessibility is capable of, I've created A11y Box for iOS.
Android Live Regions
Live Regions are one of my favourite accessibility features on Android. They're a super simple solution to a common accessibility problem that people with visual impairments can stumble across. Say you have a game app, really any type of game. Your user interacts with the play area, and as they do, their score increases or decreases depending on your customer's actions. In this example, the score display is separate to the element your customer is interacting with.
A11yUITests: An XCUI Testing library for accessibility
A11yUITests is an extension to XCTestCase that adds tests for common accessibility issues that can be run as part of an XCUITest suite. I've written a detailed discussion of the tests available if you're interested in changing/implementing these tests yourself. Alternatively, follow this quick start guide. Getting Started Adding A11yUITests I'm assuming you're already familiar with CocoaPods; if not, cocoapods.org has a good introduction. There is one minor difference here compared to most pods: we're not including this pod in our app, but in our app's test bundle.
XCUITests for accessibility
For a while now I've been looking at possibilities for automated accessibility testing on iOS. Unfortunately, I've not found any option so far that I'm happy with. I am a big fan of Apple's XCUI Test framework. Although it has its limitations, I believe there's scope for creating valid accessibility tests using this framework. Over the last few months I've been trying things out, and here's what I've come up with.
Resources
This is a personally curated list of resources I have used and think others may find helpful too. I'm always looking for new high quality mobile accessibility and inclusion resources to add here. Please share any you find with me via email or Twitter. Code Android Android Developers: Build more accessible apps - Android's developer documentation for Accessibility, including design, building & testing. With videos, code samples, and documentation. Android: Make apps more accessible - Google's guide to improving accessibility on Android
Review: Accessibility for Everyone - Laura Kalbag
Laura's introduction to web accessibility jumped out to me because it's available as an audiobook. Being dyslexic I struggle to read, so prefer to listen to audiobooks where available. Unfortunately, most technical books aren't available as audiobooks for a couple of potentially obvious reasons. Hearing code or descriptions of diagrams and illustrations read aloud may not be the best experience for an audiobook. As such, this book chooses to leave those out of the audio version.
A11y is not accessible
Accessibility is a long word. It's not the simplest of words to read or to spell, so it seems like a word that would be a good candidate for abbreviation. The common abbreviation of accessibility is a11y. We take the A and Y from the beginning and end of accessibility, and 11 for the number of letters in between. This abbreviation also creates a pleasing homophone for "ally." The irony of this abbreviation is that a11y isn't accessible.
About Mobile A11y
About Mobile A11y Mobile A11y is a collection of blogs and resources about how we as mobile developers can improve accessibility on mobile devices. From time to time the blog might also touch on related topics such as digital inclusion, and other topics around ethics in technology. The site is aimed at mobile developers and is written by a mobile developer. I hope this means other mobile developers will find the content relatable and engaging, and you'll find learning about mobile accessibility along with me helpful.
SwiftUI Accessibility
Accessibility is important. We can take that as a given. But as iOS devs we're not always sure how to make the most of the accessibility tools that Apple have provided us. We're lucky as iOS developers that we work on such a forward-thinking accessibility platform. Many people consider Apple's focus on accessibility for iOS as the driver for other technology vendors to include accessibility features as standard. To the point that we now consider accessibility an expected part of any digital platform.
SwiftUI Accessibility: Semantic Views
Semantic views are not new to SwiftUI, but changes in SwiftUI mean creating them is simple. Semantic views are not so much a language feature. They're more a technique for manipulating the accessible user interface and improving the experience for assistive technology users. A what view? A semantic view is not one view, but a collection of views grouped together because they have meaning (or semantic) together. Take a look at this iOS table view cell from the files app.
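A minimal SwiftUI sketch of the technique (the row content is invented): group a row's children into one element so VoiceOver reads them as a single announcement.

```swift
import SwiftUI

struct FileRow: View {
    var body: some View {
        HStack {
            Image(systemName: "doc")
            VStack(alignment: .leading) {
                Text("Quarterly report")
                Text("Modified yesterday")
            }
        }
        // Collapse the row into a single element so VoiceOver reads it
        // as one meaningful announcement instead of three separate stops.
        .accessibilityElement(children: .combine)
    }
}
```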
SwiftUI Accessibility: User Settings
SwiftUI allows us to read environmental values that might affect how we want to present our UI. Things like size classes and locale for example. We also get the ability to read some of the user's chosen accessibility settings allowing us to make decisions that will best fit with your customer's preference. Why? Before we cover what these options are and how to detect them I think it's important to briefly cover why we need to detect them.
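One way this tends to look in practice (a small sketch, not code from the post) is reading the Reduce Motion setting from the environment and skipping a decorative animation when it's on:

```swift
import SwiftUI

struct PulsingBadge: View {
    @Environment(\.accessibilityReduceMotion) private var reduceMotion
    @State private var pulsing = false

    var body: some View {
        Circle()
            .fill(Color.red)
            .frame(width: 12, height: 12)
            .scaleEffect(pulsing ? 1.3 : 1)
            .onAppear {
                // Respect the user's Reduce Motion preference.
                guard !reduceMotion else { return }
                withAnimation(.easeInOut(duration: 1).repeatForever()) {
                    pulsing = true
                }
            }
    }
}
```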
SwiftUI Accessibility: Attributes
When a customer enables an assistive technology to navigate your app the interface that technology navigates isn't exactly the same as the one visible on the screen. They're navigating a modified version that iOS creates especially for assistive technology. This is known as the accessibility tree or accessible user interface. iOS does an incredible job at creating the AUI for you from your SwiftUI code. We can help iOS in creating this by tweaking some element's accessibility attributes.
SwiftUI Accessibility: Traits
Accessibility traits are a group of attributes on a SwiftUI element. They inform assistive technologies how to interact with the element or present it to your customer. Each element has a selection of default traits, but you might need to change these as you create your UI. In SwiftUI there are two modifiers to use for traits, .accessibility(addTraits: ) and .accessibility(removeTraits: ) which add or remove traits respectively. Each modifier takes as its argument either a single accessibility trait or a set of traits.
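For instance, a minimal sketch using the modifiers named above (the view itself is invented) to mark a text view as a header:

```swift
import SwiftUI

struct SectionHeader: View {
    var title: String

    var body: some View {
        Text(title)
            .font(.headline)
            // Mark the text as a header so rotor navigation picks it up.
            .accessibility(addTraits: .isHeader)
        // Removing a default trait works the same way, e.g.
        // .accessibility(removeTraits: .isStaticText)
    }
}
```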
Review: Design Meets Disability - Graham Pullin
Design Meets Disability was recommended to me by accessibility consultant Jon Gibbins while we were sharing a long train journey through mid-Wales. We were talking, amongst many things, about our love for Apple products and their design. I am a hearing aid wearer, my aid is two-tone grey. A sort of dark taupe grey above, and a darker, almost gun-metal grey below. There's a clear tube into my ear. This is fine, I don't hate it.
Podcast: iPhreaks - iOS Accessibility
I was asked to guest on the iPhreaks podcast to discuss iOS accessibility. We talked about why accessibility is important, how you can improve it in your apps, and some of the changes iOS 13 and SwiftUI bring. Unfortunately, iPhreaks don't provide a transcript, but they do provide a comprehensive write-up on their site.
SwiftUI Accessibility: Accessible User Interface
Take a look at your app. Notice the collection of buttons, text, images, and other controls you can see and interact with that make up your app's user interface. When one of your customers navigates your app with Voice Control, Switch Control, VoiceOver, or any other assistive technology, this isn't the interface they're using. Instead, iOS creates a version of your interface for assistive technology to use. This interface is generally known as the accessibility tree.
Mobile A11y Talk: Accessibility without the 'V' Word
I was honoured in 2019 to be able to give my first full conference talk at CodeMobile. I was then lucky enough to be able to repeat that talk at NSLondon, NSManchester, and SWMobile meetups. As an iOS developer, I know accessibility is important for a huge range of people. But at times I think I can treat it like an afterthought. Accessibility Without the "V" Word covers a skill I think we as software engineers would benefit from developing - empathy towards our users.
SwiftUI Accessibility: Sort Priority
Assistive technology, such as VoiceOver, works in natural reading direction. In English, and most other languages, this means top left through to the bottom right. Mostly this is the right decision for assistive technology to make. This is the order anyone not using assistive technology would experience your app. Sometimes though, we make designs that don't read in this way. By using the .accessibility(sortPriority: ) modifier we can set the order in which assistive technology accesses elements.
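A small sketch of how that might look (the layout is invented): give the primary content a higher priority so it is visited before an overlay.

```swift
import SwiftUI

struct ScoreOverlay: View {
    var body: some View {
        ZStack(alignment: .topTrailing) {
            Color.green // stand-in for the main play area
                .accessibility(sortPriority: 1) // visited first
            Text("Score: 120")
                .accessibility(sortPriority: 0) // visited after the play area
        }
    }
}
```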
SwiftUI Accessibility - Named Controls
One big accessibility improvement in SwiftUI comes in the form of named controls. Nearly all controls and some non-interactive views (see Images) can take a Text view as part of their view builder. The purpose of this is to tie the meaning to the control. Toggle(isOn: $updates) { Text("Send me updates") } Imagine a UIKit layout with a UISwitch control. We'd most likely right align the switch, and provide a text label to the left.
SwiftUI Accessibility: Dynamic Type
Like all accessibility features, Dynamic Type is about customisability. Many of your customers, and maybe even you, are using Dynamic Type without even considering it an accessibility feature. Dynamic Type allows iOS users to set the text to a size that they find comfortable to read. This may mean making it a little larger so it's easier to read for those of us who haven't yet accepted we might need glasses.
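A brief sketch (the view and values are invented): built-in text styles scale automatically, and @ScaledMetric lets custom metrics scale along with the user's chosen text size.

```swift
import SwiftUI

struct ReceiptRow: View {
    // Scales the spacing in step with the user's Dynamic Type size.
    @ScaledMetric(relativeTo: .body) private var spacing = 8.0

    var body: some View {
        VStack(alignment: .leading, spacing: spacing) {
            Text("Total").font(.headline) // text styles scale automatically
            Text("£12.50").font(.body)
        }
    }
}
```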
SwiftUI Accessibility: Images
Images in SwiftUI are accessible by default. This is the opposite of what we'd experience in UIKit, where images are not accessible unless you set isAccessibilityElement to true. Sometimes making images not accessible to VoiceOver is the right decision. Like when using a glyph as a redundant way of conveying meaning alongside text. An example of this would be displaying a warning triangle next to the text "Error" or a tick next to "success".
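A minimal sketch of the warning-triangle case mentioned above: hide the redundant glyph so VoiceOver only reads the text.

```swift
import SwiftUI

struct ErrorStatusLine: View {
    var body: some View {
        HStack {
            // Decorative glyph: the text already conveys the meaning,
            // so keep the image out of the accessibility tree.
            Image(systemName: "exclamationmark.triangle")
                .accessibility(hidden: true)
            Text("Error")
        }
    }
}
```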
Baking Digital Inclusion Into Your Mobile Apps
I was asked by Capital One to contribute an accessibility piece to the Capital One Tech Medium. The blog, titled Baking Digital Inclusion Into Your Mobile Apps, briefly covers what we mean by disability and what we can do to make our mobile apps work better for everyone.
What The European Accessibility Act (Might) Mean for Mobile Development
The European Accessibility Act, or EAA, is due to become law in Europe later this year, and it defines some specific requirements for mobile. In fact, it's the first accessibility legislation that I'm aware of, anywhere, that explicitly covers mobile apps. Since 2012 the European Union has been working on standardising accessibility legislation across Europe. The ultimate aim is both to improve the experience for those who need to use assistive technology and to simplify the rules businesses need to follow on accessibility.
Building with nightly Swift toolchains on macOS
The Swift website provides nightly builds of the Swift compiler (called toolchains) for download. Building with a nightly compiler can be useful if you want to check if a bug has already been fixed on main, or if you want to experiment with upcoming language features such as Embedded Swift, as Iâve been doing lately. A toolchain is distributed as a .pkg installer that installs itself into /Library/Developer/Toolchains (or the equivalent path in your home directory). After installation, you have several options to select the toolchain you want to build with: In Xcode In Xcode, select the toolchain from the main menu (Xcode > Toolchains), then build and/or run your code normally. Not all Xcode features work with a custom toolchain. For example, playgrounds donât work, and Xcode will always use its built-in copy of the Swift Package Manager, so you wonât be able to use unreleased SwiftPM features in this way. Also, Apple wonât accept apps built with a non-standard toolchain for submission to the App Store. On the command line When building on the command line there are multiple options, depending on your preferences and what tool you want to use. The TOOLCHAINS environment variable All of the various Swift build tools respect the TOOLCHAINS environment variable. This should be set to the desired toolchainâs bundle ID, which you can find in the Info.plist file in the toolchainâs directory. Example (Iâm using a nightly toolchain from 2024-03-03 here): # My normal Swift version is 5.10 $ swift --version swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4) # Make sure xcode-select points to Xcode, not to /Library/Developer/CommandLineTools # The Command Line Tools will ignore the TOOLCHAINS variable. $ xcode-select --print-path /Applications/Xcode.app/Contents/Developer # The nightly toolchain is 6.0-dev $ export TOOLCHAINS=org.swift.59202403031a $ swift --version Apple Swift version 6.0-dev (LLVM 0c7823cab15dec9, Swift 0cc05909334c6f7) Toolchain name vs. bundle ID I think the TOOLCHAINS variable is also supposed to accept the toolchainâs name instead of the bundle ID, but this doesnât work reliably for me. I tried passing: the DisplayName from Info.plist (âSwift Development Snapshot 2024-03-03 (a)â), the ShortDisplayName (âSwift Development Snapshotâ; not unique if you have more than one toolchain installed!), the directory name, both with and without the .xctoolchain suffix, but none of them worked reliably, especially if you have multiple toolchains installed. In my limited testing, it seems that Swift picks the first toolchain that matches the short name prefix (âSwift Development Snapshotâ) and ignores the long name components. For example, when I select âSwift Development Snapshot 2024-03-03 (a)â, Swift picks swift-DEVELOPMENT-SNAPSHOT-2024-01-30-a, presumably because thatâs the âfirstâ one (in alphabetical order) I have installed. My advice: stick to the bundle ID, it works. Hereâs a useful command to find the bundle ID of the latest toolchain you have installed (you may have to adjust the path if you install your toolchains in ~/Library instead of /Library): $ plutil -extract CFBundleIdentifier raw /Library/Developer/Toolchains/swift-latest.xctoolchain/Info.plist org.swift.59202403031 # Set the toolchain to the latest installed: export TOOLCHAINS=$(plutil -extract CFBundleIdentifier raw /Library/Developer/Toolchains/swift-latest.xctoolchain/Info.plist) xcrun and xcodebuild xcrun and xcodebuild respect the TOOLCHAINS variable too. 
As an alternative, they also provide an equivalent command line parameter named --toolchain. The parameter has the same semantics as the environment variable: you pass the toolchainâs bundle ID. Example: $ xcrun --toolchain org.swift.59202403031a --find swiftc /Library/Developer/Toolchains/swift-DEVELOPMENT-SNAPSHOT-2024-03-03-a.xctoolchain/usr/bin/swiftc Swift Package Manager SwiftPM also respects the TOOLCHAINS variable, and it has a --toolchains parameter as well, but this one expects the path to the toolchain, not its bundle ID. Example: $ swift build --toolchain /Library/Developer/Toolchains/swift-latest.xctoolchain Missing toolchains are (silently) ignored Another thing to be aware of: if you specify a toolchain that isnât installed (e.g. because of a typo or because youâre trying to run a script that was developed in a different environment), none of the tools will fail: swift, xcrun, and xcodebuild silently ignore the toolchain setting and use the default Swift toolchain (set via xcode-select). SwiftPM silently ignores a missing toolchain set via TOOLCHAINS. If you pass an invalid directory to the --toolchains parameter, it at least prints a warning before it continues building with the default toolchain. I donât like this. Iâd much rather get an error if the build tool canât find the toolchain I told it to use. Itâs especially dangerous in scripts.
How the Swift compiler knows that DispatchQueue.main implies @MainActor
You may have noticed that the Swift compiler automatically treats the closure of a DispatchQueue.main.async call as @MainActor. In other words, we can call a main-actor-isolated function in the closure: import Dispatch @MainActor func mainActorFunc() { } DispatchQueue.main.async { // The compiler lets us call this because // it knows we're on the main actor. mainActorFunc() } This behavior is welcome and very convenient, but it bugs me that itâs so hidden. As far as I know it isnât documented, and neither Xcode nor any other editor/IDE Iâve used do a good job of showing me the actor context a function or closure will run in, even though the compiler has this information. Iâve written about a similar case before in Where View.task gets its main-actor isolation from, where Swift/Xcode hide essential information from the programmer by not showing certain attributes in declarations or the documentation. Itâs a syntax check So how is the magic behavior for DispatchQueue.main.async implemented? It canât be an attribute or other annotation on the closure parameter of the DispatchQueue.async method because the actual queue instance isnât known at that point. A bit of experimentation reveals that it is in fact a relatively coarse source-code-based check that singles out invocations on DispatchQueue.main, in exactly that spelling. For example, the following variations do produce warnings/errors (in Swift 5.10/6.0, respectively), even though they are just as safe as the previous code snippet. This is because we arenât using the âcorrectâ DispatchQueue.main.async spelling: let queue = DispatchQueue.main queue.async { // Error: Call to main actor-isolated global function // 'mainActorFunc()' in a synchronous nonisolated context mainActorFunc() // â } typealias DP = DispatchQueue DP.main.async { // Error: Call to main actor-isolated global function // 'mainActorFunc()' in a synchronous nonisolated context mainActorFunc() // â } I found the place in the Swift compiler source code where the check happens. In the compilerâs semantic analysis stage (called âSemaâ; this is the phase right after parsing), the type checker calls a function named adjustFunctionTypeForConcurrency, passing in a Boolean it obtained from isMainDispatchQueueMember, which returns true if the source code literally references DispatchQueue.main. In that case, the type checker adds the @_unsafeMainActor attribute to the function type. Good to know. Fun fact: since this is a purely syntax-based check, if you define your own type named DispatchQueue, give it a static main property and a function named async that takes a closure, the compiler will apply the same âfixâ to it. This is NOT recommended: // Define our own `DispatchQueue.main.async` struct DispatchQueue { static let main: Self = .init() func async(_ work: @escaping () -> Void) {} } // This calls our DispatchQueue.main.async { // No error! Compiler has inserted `@_unsafeMainActor` mainActorFunc() } Perplexity through obscurity I love that this automatic @MainActor inference for DispatchQueue.main exists. I do not love that itâs another piece of hidden, implicit behavior that makes Swift concurrency harder to learn. I want to see all the @_unsafeMainActor and @_unsafeInheritExecutor and @_inheritActorContext annotations! I believe Apple is doing the community a disservice by hiding these in Xcode. The biggest benefit of Swiftâs concurrency model over what we had before is that so many things are statically known at compile time. 
Itâs a shame that the compiler knows on which executor a particular line of code will run, but none of the tools seem to be able to show me this. Instead, Iâm forced to hunt for @MainActor annotations and hidden attributes in superclasses, protocols, etc. This feels especially problematic during the Swift 5-to-6 transition phase weâre currently in where itâs so easy to misuse concurrency and not get a compiler error (and sometimes not even a warning if you forget to enable strict concurrency checking). The most impactful change Apple can make to make Swift concurrency less confusing is to show the inferred executor context for each line of code in Xcode. Make it really obvious what code runs on the main actor, some other actor, or the global cooperative pool. Use colors or whatnot! (Other Swift IDEs should do this too, of course. Iâm just picking on Xcode because Apple has the most leverage.)
How the relative size modifier interacts with stack views
And what it can teach us about SwiftUIâs stack layout algorithm I have one more thing to say on the relative sizing view modifier from my previous post, Working with percentages in SwiftUI layout. Iâm assuming youâve read that article. The following is good to know if you want to use the modifier in your own code, but I hope youâll also learn some general tidbits about SwiftUIâs layout algorithm for HStacks and VStacks. Using relative sizing inside a stack view Letâs apply the relativeProposed modifier to one of the subviews of an HStack: HStack(spacing: 10) { Color.blue .relativeProposed(width: 0.5) Color.green Color.yellow } .border(.primary) .frame(height: 80) What do you expect to happen here? Will the blue view take up 50âŻ% of the available width? The answer is no. In fact, the blue rectangle becomes narrower than the others: This is because the HStack only proposes a proportion of its available width to each of its children. Here, the stack proposes one third of the available space to its first child, the relative sizing modifier. The modifier then halves this value, resulting in one sixth of the total width (minus spacing) for the blue color. The other two rectangles then become wider than one third because the first child view didnât use up its full proposed width. Update May 1, 2024: SwiftUIâs built-in containerRelativeFrame modifier (introduced after I wrote my modifier) doesnât exhibit this behavior because it uses the size of the nearest container view as its reference, and stack views donât count as containers in this context (which I find somewhat unintuitive, but thatâs the way it is). Order matters Now letâs move the modifier to the green color in the middle: HStack(spacing: 10) { Color.blue Color.green .relativeProposed(width: 0.5) Color.yellow } Naively, Iâd expect an equivalent result: the green rectangle should become 100âŻpt wide, and blue and yellow should be 250âŻpt each. But thatâs not what happens â the yellow view ends up being wider than the blue one: I found this unintuitive at first, but it makes sense if you understand that the HStack processes its children in sequence: The HStack proposes one third of its available space to the blue view: (620âŻâ 20) / 3 = 200. The blue view accepts the proposal and becomes 200âŻpt wide. Next up is the relativeProposed modifier. The HStack divides the remaining space by the number of remaining subviews and proposes that: 400 / 2 = 200. Our modifier halves this proposal and proposes 100âŻpt to the green view, which accepts it. The modifier in turn adopts the size of its child and returns 100âŻpt to the HStack. Since the second subview used less space than proposed, the HStack now has 300âŻpt left over to propose to its final child, the yellow color. Important: the order in which the stack lays out its subviews happens to be from left to right in this example, but thatâs not always the case. In general, HStacks and VStacks first group their subviews by layout priority (more on that below), and then order the views inside each group by flexibility such that the least flexible views are laid out first. For more on this, see How an HStack Lays out Its Children by Chris Eidhof. The views in our example are all equally flexible (they all can become any width between 0 and infinity), so the stack processes them in their ânaturalâ order. 
Leftover space isnât redistributed By now you may be able guess how the layout turns out when we move our view modifier to the last child view: HStack(spacing: 10) { Color.blue Color.green Color.yellow .relativeProposed(width: 0.5) } Blue and green each receive one third of the available width and become 200âŻpt wide. No surprises there. When the HStack reaches the relativeProposed modifier, it has 200âŻpt left to distribute. Again, the modifier and the yellow rectangle only use half of this amount. The end result is that the HStack ends up with 100âŻpt left over. The process stops here â the HStack does not start over in an attempt to find a âbetterâ solution. The stack makes itself just big enough to contain its subviews (= 520âŻpt incl. spacing) and reports that size to its parent. Layout priority We can use the layoutPriority view modifier to influence how stacks and other containers lay out their children. Letâs give the subview with the relative sizing modifier a higher layout priority (the default priority is 0): HStack(spacing: 10) { Color.blue Color.green Color.yellow .relativeProposed(width: 0.5) .layoutPriority(1) } This results in a layout where the yellow rectangle actually takes up 50âŻ% of the available space: Explanation: The HStack groups its children by layout priority and then processes each group in sequence, from highest to lowest priority. Each group is proposed the entire remaining space. The first layout group only contains a single view, our relative sizing modifier with the yellow color. The HStack proposes the entire available space (minus spacing) = 600âŻpt. Our modifier halves the proposal, resulting in 300âŻpt for the yellow view. There are 300âŻpt left over for the second layout group. These are distributed equally among the two children because each subview accepts the proposed size. Conclusion The code I used to generate the images in this article is available on GitHub. I only looked at HStacks here, but VStacks work in exactly the same way for the vertical dimension. SwiftUIâs layout algorithm always follows this basic pattern of proposed sizes and responses. Each of the built-in âprimitiveâ views (e.g. fixed and flexible frames, stacks, Text, Image, Spacer, shapes, padding, background, overlay) has a well-defined (if not always well-documented) layout behavior that can be expressed as a function (ProposedViewSize) -> CGSize. Youâll need to learn the behavior for view to work effectively with SwiftUI. A concrete lesson Iâm taking away from this analysis: HStack and VStack donât treat layout as an optimization problem that tries to find the optimal solution for a set of constraints (autolayout style). Rather, they sort their children in a particular way and then do a single proposal-and-response pass over them. If thereâs space leftover at the end, or if the available space isnât enough, then so be it.
Working with percentages in SwiftUI layout
SwiftUIâs layout primitives generally donât provide relative sizing options, e.g. âmake this view 50âŻ% of the width of its containerâ. Letâs build our own! Use case: chat bubbles Consider this chat conversation view as an example of what I want to build. The chat bubbles always remain 80âŻ% as wide as their container as the view is resized: The chat bubbles should become 80âŻ% as wide as their container. Download video Building a proportional sizing modifier 1. The Layout We can build our own relative sizing modifier on top of the Layout protocol. The layout multiplies its own proposed size (which it receives from its parent view) with the given factors for width and height. It then proposes this modified size to its only subview. Hereâs the implementation (the full code, including the demo app, is on GitHub): /// A custom layout that proposes a percentage of its /// received proposed size to its subview. /// /// - Precondition: must contain exactly one subview. fileprivate struct RelativeSizeLayout: Layout { var relativeWidth: Double var relativeHeight: Double func sizeThatFits( proposal: ProposedViewSize, subviews: Subviews, cache: inout () ) -> CGSize { assert(subviews.count == 1, "expects a single subview") let resizedProposal = ProposedViewSize( width: proposal.width.map { $0 * relativeWidth }, height: proposal.height.map { $0 * relativeHeight } ) return subviews[0].sizeThatFits(resizedProposal) } func placeSubviews( in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout () ) { assert(subviews.count == 1, "expects a single subview") let resizedProposal = ProposedViewSize( width: proposal.width.map { $0 * relativeWidth }, height: proposal.height.map { $0 * relativeHeight } ) subviews[0].place( at: CGPoint(x: bounds.midX, y: bounds.midY), anchor: .center, proposal: resizedProposal ) } } Notes: I made the type private because I want to control how it can be used. This is important for maintaining the assumption that the layout only ever has a single subview (which makes the math much simpler). Proposed sizes in SwiftUI can be nil or infinity in either dimension. Our layout passes these special values through unchanged (infinity times a percentage is still infinity). Iâll discuss below what implications this has for users of the layout. 2. The View extension Next, weâll add an extension on View that uses the layout we just wrote. This becomes our public API: extension View { /// Proposes a percentage of its received proposed size to `self`. public func relativeProposed(width: Double = 1, height: Double = 1) -> some View { RelativeSizeLayout(relativeWidth: width, relativeHeight: height) { // Wrap content view in a container to make sure the layout only // receives a single subview. Because views are lists! VStack { // alternatively: `_UnaryViewAdaptor(self)` self } } } } Notes: I decided to go with a verbose name, relativeProposed(width:height:), to make the semantics clear: weâre changing the proposed size for the subview, which wonât always result in a different actual size. More on this below. Weâre wrapping the subview (self in the code above) in a VStack. This might seem redundant, but itâs necessary to make sure the layout only receives a single element in its subviews collection. See Chris Eidhofâs SwiftUI Views are Lists for an explanation. Usage The layout code for a single chat bubble in the demo video above looks like this: let alignment: Alignment = message.sender == .me ? 
.trailing : .leading chatBubble .relativeProposed(width: 0.8) .frame(maxWidth: .infinity, alignment: alignment) The outermost flexible frame with maxWidth: .infinity is responsible for positioning the chat bubble with leading or trailing alignment, depending on whoâs speaking. You can even add another frame that limits the width to a maximum, say 400 points: let alignment: Alignment = message.sender == .me ? .trailing : .leading chatBubble .frame(maxWidth: 400) .relativeProposed(width: 0.8) .frame(maxWidth: .infinity, alignment: alignment) Here, our relative sizing modifier only has an effect as the bubbles become narrower than 400 points. In a wider window the width-limiting frame takes precedence. I like how composable this is! Download video 80âŻ% wonât always result in 80âŻ% If you watch the debugging guides Iâm drawing in the video above, youâll notice that the relative sizing modifier never reports a width greater than 400, even if the window is wide enough: The relative sizing modifier accepts the actual size of its subview as its own size. This is because our layout only adjusts the proposed size for its subview but then accepts the subviewâs actual size as its own. Since SwiftUI views always choose their own size (which the parent canât override), the subview is free to ignore our proposal. In this example, the layoutâs subview is the frame(maxWidth: 400) view, which sets its own width to the proposed width or 400, whichever is smaller. Understanding the modifierâs behavior Proposed size â actual size Itâs important to internalize that the modifier works on the basis of proposed sizes. This means it depends on the cooperation of its subview to achieve its goal: views that ignore their proposed size will be unaffected by our modifier. I donât find this particularly problematic because SwiftUIâs entire layout system works like this. Ultimately, SwiftUI views always determine their own size, so you canât write a modifier that âdoes the right thingâ (whatever that is) for an arbitrary subview hierarchy. nil and infinity I already mentioned another thing to be aware of: if the parent of the relative sizing modifier proposes nil or .infinity, the modifier will pass the proposal through unchanged. Again, I donât think this is particularly bad, but itâs something to be aware of. Proposing nil is SwiftUIâs way of telling a view to become its ideal size (fixedSize does this). Would you ever want to tell a view to become, say, 50âŻ% of its ideal width? Iâm not sure. Maybe itâd make sense for resizable images and similar views. By the way, you could modify the layout to do something like this: If the proposal is nil or infinity, forward it to the subview unchanged. Take the reported size of the subview as the new basis and apply the scaling factors to that size (this still breaks down if the child returns infinity). Now propose the scaled size to the subview. The subview might respond with a different actual size. Return this latest reported size as your own size. This process of sending multiple proposals to child views is called probing. Lots of built-in containers views do this too, e.g. VStack and HStack. Nesting in other container views The relative sizing modifier interacts in an interesting way with stack views and other containers that distribute the available space among their children. I thought this was such an interesting topic that I wrote a separate article about it: How the relative size modifier interacts with stack views. 
The code The complete code is available in a Gist on GitHub. Digression: Proportional sizing in early SwiftUI betas The very first SwiftUI betas in 2019 did include proportional sizing modifiers, but they were taken out before the final release. Chris Eidhof preserved a copy of SwiftUIâs âheader fileâ from that time that shows their API, including quite lengthy documentation. I donât know why these modifiers didnât survive the beta phase. The release notes from 2019 donât give a reason: The relativeWidth(_:), relativeHeight(_:), and relativeSize(width:height:) modifiers are deprecated. Use other modifiers like frame(minWidth:idealWidth:maxWidth:minHeight:idealHeight:maxHeight:alignment:) instead. (51494692) I also donât remember how these modifiers worked. They probably had somewhat similar semantics to my solution, but I canât be sure. The doc comments linked above sound straightforward (âSets the width of this view to the specified proportion of its parentâs width.â), but they donât mention the intricacies of the layout algorithm (proposals and responses) at all. containerRelativeFrame Update May 1, 2024: Apple introduced the containerRelativeFrame modifier for its 2023 OSes (iOS 17/macOS 14). If your deployment target permits it, this can be a good, built-in alternative. Note that containerRelativeFrame behaves differently than my relativeProposed modifier as it computes the size relative to the nearest container view, whereas my modifier uses its proposed size as the reference. The SwiftUI documentation somewhat vaguely lists the views that count as a container for containerRelativeFrame. Notably, stack views donât count! Check out Jordan Morganâs article Modifier Monday: .containerRelativeFrame(_ axes:) (2022-06-26) to learn more about containerRelativeFrame.
Keyboard shortcuts for Export Unmodified Original in Photos for Mac
Problem The Photos app on macOS doesnât provide a keyboard shortcut for the Export Unmodified Original command. macOS allows you to add your own app-specific keyboard shortcuts via System Settings > Keyboard > Keyboard Shortcuts > App Shortcuts. You need to enter the exact spelling of the menu item you want to invoke. Photos renames the command depending on whatâs selected: Export Unmodified Original For 1 Photoâ turns into â⊠Originals For 2 Videosâ turns into â⊠For 3 Itemsâ (for mixed selections), and so on. Argh! The System Settings UI for assigning keyboard shortcuts is extremely tedious to use if you want to add more than one or two shortcuts. Dynamically renaming menu commands is cute, but it becomes a problem when you want to assign keyboard shortcuts. Solution: shell script Hereâs a Bash script1 that assigns Ctrl + Opt + Cmd + E to Export Unmodified Originals for up to 20 selected items: #!/bin/bash # Assigns a keyboard shortcut to the Export Unmodified Originals # menu command in Photos.app on macOS. # @ = Command # ^ = Control # ~ = Option # $ = Shift shortcut='@~^e' # Set shortcut for 1 selected item echo "Setting shortcut for 1 item" defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Original For 1 Photo" "$shortcut" defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Original For 1 Video" "$shortcut" # Set shortcut for 2-20 selected items objects=(Photos Videos Items) for i in {2..20} do echo "Setting shortcut for $i items" for object in "${objects[@]}" do defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Originals For $i $object" "$shortcut" done done # Use this command to verify the result: # defaults read com.apple.Photos NSUserKeyEquivalents The script is also available on GitHub. Usage: Quit Photos.app. Run the script. Feel free to change the key combo or count higher than 20. Open Photos.app. Note: Thereâs a bug in Photos.app on macOS 13.2 (and at least some earlier versions). Custom keyboard shortcuts donât work until youâve opened the menu of the respective command at least once. So you must manually open the File > Export once before the shortcut will work. (For Apple folks: FB11967573.) I still write Bash scripts because Shellcheck doesnât support Zsh. â©ïž
Swift Evolution proposals in Alfred
I rarely participate actively in the Swift Evolution process, but I frequently refer to evolution proposals for my work, often multiple times per week. The proposals aren't always easy to read, but they're the most comprehensive (and sometimes only) documentation we have for many Swift features. For years, my tool of choice for searching Swift Evolution proposals has been Karoy Lorentey's swift-evolution workflow for Alfred. The workflow broke recently due to data format changes. Karoy was kind enough to add me as a maintainer so I could fix it. The new version 2.1.0 is now available on GitHub. Download the .alfredworkflow file and double-click to install. Besides the fix, the update has a few other improvements: the proposal title is now displayed more prominently; there are new actions to copy the proposal title (hold down Command) or copy it as a Markdown link (hold down Shift + Command); and the script forwards the main metadata of the selected proposal (id, title, status, URL) to Alfred. If you want to extend the workflow with your own actions, you can refer to these variables.
Pattern matching on error codes
Foundation overloads the pattern matching operator ~= to enable matching against error codes in catch clauses. catch clauses in Swift support pattern matching, using the same patterns youâd use in a case clause inside a switch or in an if case ⊠statement. For example, to handle a file-not-found error you might write: import Foundation do { let fileURL = URL(filePath: "/abc") // non-existent file let data = try Data(contentsOf: fileURL) } catch let error as CocoaError where error.code == .fileReadNoSuchFile { print("File doesn't exist") } catch { print("Other error: \(error)") } This binds a value of type CocoaError to the variable error and then uses a where clause to check the specific error code. However, if you donât need access to the complete error instance, thereâs a shorter way to write this, matching directly against the error code: let data = try Data(contentsOf: fileURL) - } catch let error as CocoaError where error.code == .fileReadNoSuchFile { + } catch CocoaError.fileReadNoSuchFile { print("File doesn't exist") Foundation overloads ~= I was wondering why this shorter syntax works. Is there some special compiler magic for pattern matching against error codes of NSError instances? Turns out: no, the answer is much simpler. Foundation includes an overload for the pattern matching operator ~= that matches error values against error codes.1 The implementation looks something like this: public func ~= (code: CocoaError.Code, error: any Error) -> Bool { guard let error = error as? CocoaError else { return false } return error.code == code } The actual code in Foundation is a little more complex because it goes through a hidden protocol named _ErrorCodeProtocol, but thatâs not important. You can check out the code in the Foundation repository: Darwin version, swift-corelibs-foundation version. This matching on error codes is available for CocoaError, URLError, POSIXError, and MachError (and possibly more types in other Apple frameworks, I havenât checked). I wrote about the ~= operator before, way back in 2015(!): Pattern matching in Swift and More pattern matching examples. â©ïž
You should watch Double Fine Adventure
I know Iâm almost a decade late to this party, but Iâm probably not the only one, so here goes. Double Fine Adventure was a wildly successful 2012 Kickstarter project to crowdfund the development of a point-and-click adventure game and, crucially, to document its development on video. The resulting game Broken Age was eventually released in two parts in 2014 and 2015. Broken Age is a beautiful game and I recommend you try it. Itâs available for lots of platforms and is pretty cheap (10â15 euros/dollars or less). I played it on the Nintendo Switch, which worked very well. Broken Age. But the real gem to me was watching the 12.5-hour documentary on YouTube. A video production team followed the entire three-year development process from start to finish. It provides a refreshingly candid and transparent insight into âhow the sausage is madeâ, including sensitive topics such as financial problems, layoffs, and long work hours. Throughout all the ups and downs thereâs a wonderful sense of fun and camaraderie among the team at Double Fine, which made watching the documentary even more enjoyable to me than playing Broken Age. You can tell these people love working with each other. I highly recommend taking a look if you find this mildly interesting. The Double Fine Adventure documentary. The first major game spoilers donât come until episode 15, so you can safely watch most of the documentary before playing the game (and this is how the original Kickstarter backers experienced it). However, I think itâs even more interesting to play the game first, or to experience both side-by-side. My suggestion: watch two or three episodes of the documentary. If you like it, start playing Broken Age alongside it.
Understanding SwiftUI view lifecycles
I wrote an app called SwiftUI View Lifecycle. The app allows you to observe how different SwiftUI constructs and containers affect a viewâs lifecycle, including the lifetime of its state and when onAppear gets called. The code for the app is on GitHub. It can be built for iOS and macOS. The view tree and the render tree When we write SwiftUI code, we construct a view tree that consists of nested view values. Instances of the view tree are ephemeral: SwiftUI constantly destroys and recreates (parts of) the view tree as it processes state changes. The view tree serves as a blueprint from which SwiftUI creates a second tree, which represents the actual view âobjectsâ that are âon screenâ at any given time (the âobjectsâ could be actual UIView or NSView objects, but also other representations; the exact meaning of âon screenâ can vary depending on context). Chris Eidhof likes to call this second tree the render tree (the link points to a 3 minute video where Chris demonstrates this duality, highly recommended). The render tree persists across state changes and is used by SwiftUI to establish view identity. When a state change causes a change in a viewâs value, SwiftUI will find the corresponding view object in the render tree and update it in place, rather than recreating a new view object from scratch. This is of course key to making SwiftUI efficient, but the render tree has another important function: it controls the lifetimes of views and their state. View lifecycles and state We can define a viewâs lifetime as the timespan it exists in the render tree. The lifetime begins with the insertion into the render tree and ends with the removal. Importantly, the lifetime extends to view state defined with @State and @StateObject: when a view gets removed from the render tree, its state is lost; when the view gets inserted again later, the state will be recreated with its initial value. The SwiftUI View Lifecycle app tracks three lifecycle events for a view and displays them as timestamps: @State = when the viewâs state was created (equivalent to the start of the viewâs lifetime) onAppear = when onAppear was last called onDisappear = when onDisappear was last called The lifecycle monitor view displays the timestamps when certain lifecycle events last occurred. The app allows you to observe these events in different contexts. As you click your way through the examples, youâll notice that the timing of these events changes depending on the context a view is embedded in. For example: An if/else statement creates and destroys its child views every time the condition changes; state is not preserved. A ScrollView eagerly inserts all of its children into the render tree, regardless of whether theyâre inside the viewport or not. All children appear right away and never disappear. A List with dynamic content (using ForEach) lazily inserts only the child views that are currently visible. But once a child viewâs lifetime has started, the list will keep its state alive even when it gets scrolled offscreen again. onAppear and onDisappear get called repeatedly as views are scrolled into and out of the viewport. A NavigationStack calls onAppear and onDisappear as views are pushed and popped. State for parent levels in the stack is preserved when a child view is pushed. A TabView starts the lifetime of all child views right away, even the non-visible tabs. onAppear and onDisappear get called repeatedly as the user switches tabs, but the tab view keeps the state alive for all tabs. 
Lessons Here are a few lessons to take away from this: Different container views may have different performance and memory usage behaviors, depending on how long they keep child views alive. onAppear isn't necessarily called when the state is created. It can happen later (but never earlier). onAppear can be called multiple times in some container views. If you need a side effect to happen exactly once in a view's lifetime, consider writing yourself an onFirstAppear helper, as shown by Ian Keen and Jordan Morgan in Running Code Only Once in SwiftUI (2022-11-01); a sketch of such a helper follows below. I'm sure you'll find more interesting tidbits when you play with the app. Feedback is welcome!
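Here is a minimal sketch of what such an onFirstAppear helper could look like. This is my own rough version, not the implementation from the linked article; it uses a @State flag so the action runs only on the first appearance within the view's lifetime:

```swift
import SwiftUI

struct OnFirstAppearModifier: ViewModifier {
    let action: () -> Void
    // The flag survives repeated onAppear calls as long as the view
    // stays in the render tree, so the action fires only once.
    @State private var hasAppeared = false

    func body(content: Content) -> some View {
        content.onAppear {
            guard !hasAppeared else { return }
            hasAppeared = true
            action()
        }
    }
}

extension View {
    func onFirstAppear(perform action: @escaping () -> Void) -> some View {
        modifier(OnFirstAppearModifier(action: action))
    }
}
```

Note that if the view leaves the render tree entirely (for example in an if/else branch), the state is discarded and the action will run again when the view is recreated, which matches the definition of "once per lifetime" used above.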
clipped() doesn't affect hit testing
The clipped() modifier in SwiftUI clips a view to its bounds, hiding any out-of-bounds content. But note that clipping doesnât affect hit testing; the clipped view can still receive taps/clicks outside the visible area. I tested this on iOS 16.1 and macOS 13.0. Example Hereâs a 300Ă300 square, which we then constrain to a 100Ă100 frame. I also added a border around the outer frame to visualize the views: Rectangle() .fill(.orange.gradient) .frame(width: 300, height: 300) // Set view to 100Ă100 â renders out of bounds .frame(width: 100, height: 100) .border(.blue) SwiftUI views donât clip their content by default, hence the full 300Ă300 square remains visible. Notice the blue border that indicates the 100Ă100 outer frame: Now letâs add .clipped() to clip the large square to the 100Ă100 frame. I also made the square tappable and added a button: VStack { Button("You can't tap me!") { buttonTapCount += 1 } .buttonStyle(.borderedProminent) Rectangle() .fill(.orange.gradient) .frame(width: 300, height: 300) .frame(width: 100, height: 100) .clipped() .onTapGesture { rectTapCount += 1 } } When you run this code, youâll discover that the button isnât tappable at all. This is because the (unclipped) square, despite not being fully visible, obscures the button and âstealsâ all taps. The dashed outline indicates the hit area of the orange square. The button isnât tappable because itâs covered by the clipped view with respect to hit testing. The fix: .contentShape() The contentShape(_:) modifier defines the hit testing area for a view. By adding .contentShape(Rectangle()) to the 100Ă100 frame, we limit hit testing to that area, making the button tappable again: Rectangle() .fill(.orange.gradient) .frame(width: 300, height: 300) .frame(width: 100, height: 100) .contentShape(Rectangle()) .clipped() Note that the order of .contentShape(Rectangle()) and .clipped() could be swapped. The important thing is that contentShape is an (indirect) parent of the 100Ă100 frame modifier that defines the size of the hit testing area. Video demo I made a short video that demonstrates the effect: Initially, taps on the button, or even on the surrounding whitespace, register as taps on the square. The top switch toggles display of the square before clipping. This illustrates its hit testing area. The second switch adds .contentShape(Rectangle()) to limit hit testing to the visible area. Now tapping the button increments the buttonâs tap count. The full code for this demo is available on GitHub. Download video Summary The clipped() modifier doesnât affect the clipped viewâs hit testing region. The same is true for clipShape(_:). Itâs often a good idea to combine these modifiers with .contentShape(Rectangle()) to bring the hit testing logic in sync with the UI.
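Following the summary's advice, one way to keep clipping and hit testing in sync is a tiny convenience extension. This is a hedged sketch (the name clippedAndHitTestable is made up), not an official API:

```swift
import SwiftUI

extension View {
    /// Clips the view to its bounds and restricts hit testing to the same area,
    /// so taps outside the visible region no longer reach the view.
    func clippedAndHitTestable() -> some View {
        self
            .contentShape(Rectangle()) // hit testing area = the view's frame
            .clipped()                 // visual clipping to the same frame
    }
}
```

With this helper, the example above becomes Rectangle().fill(.orange.gradient).frame(width: 300, height: 300).frame(width: 100, height: 100).clippedAndHitTestable(), and the button behind the square is tappable again.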
When .animation animates more (or less) than it's supposed to
On the positioning of the .animation modifier in the view tree, or: âRenderingâ vs. ânon-renderingâ view modifiers The documentation for SwiftUIâs animation modifier says: Applies the given animation to this view when the specified value changes. This sounds unambiguous to me: it sets the animation for âthis viewâ, i.e. the part of the view tree that .animation is being applied to. This should give us complete control over which modifiers we want to animate, right? Unfortunately, itâs not that simple: itâs easy to run into situations where a view change inside an animated subtree doesnât get animated, or vice versa. Unsurprising examples Let me give you some examples, starting with those that do work as documented. I tested all examples on iOS 16.1 and macOS 13.0. 1. Sibling views can have different animations Independent subtrees of the view tree can be animated independently. In this example we have three sibling views, two of which are animated with different durations, and one that isnât animated at all: struct Example1: View { var flag: Bool var body: some View { HStack(spacing: 40) { Rectangle() .frame(width: 80, height: 80) .foregroundColor(.green) .scaleEffect(flag ? 1 : 1.5) .animation(.easeOut(duration: 0.5), value: flag) Rectangle() .frame(width: 80, height: 80) .foregroundColor(flag ? .yellow : .red) .rotationEffect(flag ? .zero : .degrees(45)) .animation(.easeOut(duration: 2.0), value: flag) Rectangle() .frame(width: 80, height: 80) .foregroundColor(flag ? .pink : .mint) } } } The two animation modifiers each apply to their own subtree. They donât interfere with each other and have no effect on the rest of the view hierarchy: Download video 2. Nested animation modifiers When two animation modifiers are nested in a single view tree such that one is an indirect parent of the other, the inner modifier can override the outer animation for its subviews. The outer animation applies to view modifiers that are placed between the two animation modifiers. In this example we have one rectangle view with animated scale and rotation effects. The outer animation applies to the entire subtree, including both effects. The inner animation modifier overrides the outer animation only for whatâs nested below it in the view tree, i.e. the scale effect: struct Example2: View { var flag: Bool var body: some View { Rectangle() .frame(width: 80, height: 80) .foregroundColor(.green) .scaleEffect(flag ? 1 : 1.5) .animation(.default, value: flag) // inner .rotationEffect(flag ? .zero : .degrees(45)) .animation(.default.speed(0.3), value: flag) // outer } } As a result, the scale and rotation changes animate at different speeds: Download video Note that we can also pass .animation(nil, value: flag) to selectively disable animations for a subtree, overriding a non-nil animation further up the view tree. 3. animation only animates its children (with exceptions) As a general rule, the animation modifier only applies to its subviews. In other words, views and modifiers that are direct or indirect parents of an animation modifier should not be animated. As weâll see below, it doesnât always work like that, but hereâs an example where it does. This is a slight variation of the previous code snippet where I removed the outer animation modifier (and changed the color for good measure): struct Example3: View { var flag: Bool var body: some View { Rectangle() .frame(width: 80, height: 80) .foregroundColor(.orange) .scaleEffect(flag ? 
1 : 1.5) .animation(.default, value: flag) // Don't animate the rotation .rotationEffect(flag ? .zero : .degrees(45)) } } Recall that the order in which view modifiers are written in code is inverted with respect to the actual view tree hierarchy. Each view modifier is a new view that wraps the view itâs being applied to. So in our example, the scale effect is the child of the animation modifier, whereas the rotation effect is its parent. Accordingly, only the scale change gets animated: Download video Surprising examples Now itâs time for the âfunâ part. It turns out not all view modifiers behave as intuitively as scaleEffect and rotationEffect when combined with the animation modifier. 4. Some modifiers donât respect the rules In this example weâre changing the color, size, and alignment of the rectangle. Only the size change should be animated, which is why weâve placed the alignment and color mutations outside the animation modifier: struct Example4: View { var flag: Bool var body: some View { let size: CGFloat = flag ? 80 : 120 Rectangle() .frame(width: size, height: size) .animation(.default, value: flag) .frame(maxWidth: .infinity, alignment: flag ? .leading : .trailing) .foregroundColor(flag ? .pink : .indigo) } } Unfortunately, this doesnât work as intended, as all three changes are animated: Download video It behaves as if the animation modifier were the outermost element of this view subtree. 5. padding and border This oneâs sort of the inverse of the previous example because a change we want to animate doesnât get animated. The padding is a child of the animation modifier, so Iâd expect changes to it to be animated, i.e. the border should grow and shrink smoothly: struct Example5: View { var flag: Bool var body: some View { Rectangle() .frame(width: 80, height: 80) .padding(flag ? 20 : 40) .animation(.default, value: flag) .border(.primary) .foregroundColor(.cyan) } } But thatâs not what happens: Download video 6. Font modifiers Font modifiers also behave seemingly erratic with respect to the animation modifier. In this example, we want to animate the font width, but not the size or weight (smooth text animation is a new feature in iOS 16): struct Example6: View { var flag: Bool var body: some View { Text("Hello!") .fontWidth(flag ? .condensed : .expanded) .animation(.default, value: flag) .font(.system( size: flag ? 40 : 60, weight: flag ? .regular : .heavy) ) } } You guessed it, this doesnât work as intended. Instead, all text properties animate smoothly: Download video Why does it work like this? In summary, the placement of the animation modifier in the view tree allows some control over which changes get animated, but it isnât perfect. Some modifiers, such as scaleEffect and rotationEffect, behave as expected, whereas others (frame, padding, foregroundColor, font) are less controllable. I donât fully understand the rules, but the important factor seems to be if a view modifier actually ârendersâ something or not. For instance, foregroundColor just writes a color into the environment; the modifier itself doesnât draw anything. I suppose this is why its position with respect to animation is irrelevant: RoundedRectangle(cornerRadius: flag ? 0 : 40) .animation(.default, value: flag) // Color change still animates, even though weâre outside .animation .foregroundColor(flag ? .pink : .indigo) The rendering presumably takes place on the level of the RoundedRectangle, which reads the color from the environment. 
At this point the animation modifier is active, so SwiftUI will animate all changes that affect how the rectangle is rendered, regardless of where in the view tree they're coming from. The same explanation makes intuitive sense for the font modifiers in example 6. The actual rendering, and therefore the animation, occurs on the level of the Text view. The various font modifiers affect how the text is drawn, but they don't render anything themselves. Similarly, padding and frame (including the frame's alignment) are "non-rendering" modifiers too. They don't use the environment, but they influence the layout algorithm, which ultimately affects the size and position of one or more "rendering" views, such as the rectangle in example 4. That rectangle sees a combined change in its geometry, but it can't tell where the change came from, so it'll animate the full geometry change. In example 5, the "rendering" view that's affected by the padding change is the border (which is implemented as a stroked rectangle in an overlay). Since the border is a parent of the animation modifier, its geometry change is not animated. In contrast to frame and padding, scaleEffect and rotationEffect are "rendering" modifiers. They apparently perform the animations themselves. Conclusion SwiftUI views and view modifiers can be divided into "rendering" and "non-rendering" groups (I wish I had better terms for these). In iOS 16/macOS 13, the placement of the animation modifier with respect to non-rendering modifiers is irrelevant for deciding if a change gets animated or not. Non-rendering modifiers include (non-exhaustive list): Layout modifiers (frame, padding, position, offset) Font modifiers (font, bold, italic, fontWeight, fontWidth) Other modifiers that write data into the environment, e.g. foregroundColor, foregroundStyle, symbolRenderingMode, symbolVariant Rendering modifiers include (non-exhaustive list): clipShape, cornerRadius Geometry effects, e.g. scaleEffect, rotationEffect, projectionEffect Graphical effects, e.g. blur, brightness, hueRotation, opacity, saturation, shadow
Xcode 14.0 generates wrong concurrency code for macOS targets
Mac apps built with Xcode 14.0 and 14.0.1 may contain concurrency bugs because the Swift 5.7 compiler can generate invalid code when targeting the macOS 12.3 SDK. If you distribute Mac apps, you should build them with Xcode 13.4.1 until Xcode 14.1 is released. Hereâs what happened: Swift 5.7 implements SE-0338: Clarify the Execution of Non-Actor-Isolated Async Functions, which introduces new rules how async functions hop between executors. Because of SE-0338, when compiling concurrency code, the Swift 5.7 compiler places executor hops in different places than Swift 5.6. Some standard library functions need to opt out of the new rules. They are annotated with a new, unofficial attribute @_unsafeInheritExecutor, which was introduced for this purpose. When the Swift 5.7 compiler sees this attribute, it generates different executor hops. The attribute is only present in the Swift 5.7 standard library, i.e. in the iOS 16 and macOS 13 SDKs. This is fine for iOS because compiler version and the SDKâs standard library version match in Xcode 14.0. But for macOS targets, Xcode 14.0 uses the Swift 5.7 compiler with the standard library from Swift 5.6, which doesnât contain the @_unsafeInheritExecutor attribute. This is what causes the bugs. Note that the issue is caused purely by the version mismatch at compile-time. The standard library version used by the compiled app at run-time (which depends on the OS version the app runs on) isnât relevant. As soon as Xcode 14.1 gets released with the macOS 13 SDK, the version mismatch will go away, and Mac targets built with Xcode 14.1 wonât exhibit these bugs. Third-party developers had little chance of discovering the bug during the Xcode 14.0 beta phase because the betas ship with the new beta macOS SDK. The version mismatch occurs when the final Xcode release in September reverts back to the old macOS SDK to accommodate the different release schedules of iOS and macOS. Sources Breaking concurrency invariants is a serious issue, though Iâm not sure how much of a problem this is in actual production apps. Here are all related bug reports that I know of: Concurrency is broken in Xcode 14 for macOS (2022-09-14) withUnsafeContinuation can break actor isolation (2022-10-07) And explanations of the cause from John McCall of the Swift team at Apple: John McCall (2022-10-07): This guarantee is unfortunately broken with Xcode 14 when compiling for macOS because itâs shipping with an old macOS SDK that doesnât declare that withUnsafeContinuation inherits its callerâs execution context. And yes, there is a related actor-isolation issue because of this bug. That will be fixed by the release of the new macOS SDK. John McCall (2022-10-07): Now, there is a bug in Xcode 14 when compiling for the macOS SDK because it ships with an old SDK. That bug doesnât actually break any of the ordering properties above. It does, however, break Swiftâs data isolation guarantees because it causes withUnsafeContinuation, when called from an actor-isolated context, to send a non-Sendable function to a non-isolated executor and then call it, which is completely against the rules. And in fact, if you turn strict sendability checking on when compiling against that SDK, you will get a diagnostic about calling withUnsafeContinuation because it thinks that youâre violating the rules (because withUnsafeContinuation doesnât properly inherit the execution context of its caller). Poor communication from Apple What bugs me most about the situation is Appleâs poor communication. 
When the official, current release of your programming language ships with a broken compiler for one of your most important platforms, the least I'd expect is a big red warning at the top of the release notes. I can't find any mention of this issue in the Xcode 14.0 release notes or Xcode 14.0.1 release notes, however. Even better: the warning should be displayed prominently in Xcode, or Xcode 14.0 should outright refuse to build Mac apps. I'm sure the latter option isn't practical for all sorts of reasons, although it sounds logical to me: if the only safe compiler/SDK combinations are either 5.6 with the macOS 12 SDK or 5.7 with the macOS 13 SDK, there shouldn't be an official Xcode version that combines the 5.7 compiler with the macOS 12 SDK.
Where View.task gets its main-actor isolation from
SwiftUIâs .task modifier inherits its actor context from the surrounding function. If you call .task inside a viewâs body property, the async operation will run on the main actor because View.body is (semi-secretly) annotated with @MainActor. However, if you call .task from a helper property or function that isnât @MainActor-annotated, the async operation will run in the cooperative thread pool. Example Hereâs an example. Notice the two .task modifiers in body and helperView. The code is identical in both, yet only one of them compiles â in helperView, the call to a main-actor-isolated function fails because weâre not on the main actor in that context: We can call a main-actor-isolated function from inside body, but not from a helper property. import SwiftUI @MainActor func onMainActor() { print("on MainActor") } struct ContentView: View { var body: some View { VStack { helperView Text("in body") .task { // We can call a @MainActor func without await onMainActor() } } } var helperView: some View { Text("in helperView") .task { // âïž Error: Expression is 'async' but is not marked with 'await' onMainActor() } } } Why does it work like this? This behavior is caused by two (semi-)hidden annotations in the SwiftUI framework: The View protocol annotates its body property with @MainActor. This transfers to all conforming types. View.task annotates its action parameter with @_inheritActorContext, causing it to adopt the actor context from its use site. Sadly, none of these annotations are visible in the SwiftUI documentation, making it very difficult to understand whatâs going on. The @MainActor annotation on View.body is present in Xcodeâs generated Swift interface for SwiftUI (Jump to Definition of View), but that feature doesnât work reliably for me, and as weâll see, it doesnât show the whole truth, either. View.body is annotated with @MainActor in Xcodeâs generated interface for SwiftUI. SwiftUIâs module interface To really see the declarations the compiler sees, we need to look at SwiftUIâs module interface file. A module interface is like a header file for Swift modules. It lists the moduleâs public declarations and even the implementations of inlinable functions. Module interfaces use normal Swift syntax and have the .swiftinterface file extension. SwiftUIâs module interface is located at: [Path to Xcode.app]/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/System/Library/Frameworks/SwiftUI.framework/Modules/SwiftUI.swiftmodule/arm64e-apple-ios.swiftinterface (There can be multiple .swiftinterface files in that directory, one per CPU architecture. Pick any one of them. Pro tip for viewing the file in Xcode: Editor > Syntax Coloring > Swift enables syntax highlighting.) 
Inside, youâll find that View.body has the @MainActor(unsafe) attribute: @available(iOS 13.0, macOS 10.15, tvOS 13.0, watchOS 6.0, *) @_typeEraser(AnyView) public protocol View { // ⊠@SwiftUI.ViewBuilder @_Concurrency.MainActor(unsafe) var body: Self.Body { get } } And youâll find this declaration for .task, including the @_inheritActorContext attribute: @available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *) extension SwiftUI.View { #if compiler(>=5.3) && $AsyncAwait && $Sendable && $InheritActorContext @inlinable public func task( priority: _Concurrency.TaskPriority = .userInitiated, @_inheritActorContext _ action: @escaping @Sendable () async -> Swift.Void ) -> some SwiftUI.View { modifier(_TaskModifier(priority: priority, action: action)) } #endif // ⊠} SwiftUIâs module interface file shows the @_inheritActorContext annotatation on View.task. Putting it all together Armed with this knowledge, everything makes more sense: When used inside body, task inherits the @MainActor context from body. When used outside of body, there is no implicit @MainActor annotation, so task will run its operation on the cooperative thread pool by default. Unless the view contains an @ObservedObject or @StateObject property, which makes the entire view @MainActor via this obscure rule for property wrappers whose wrappedValue property is bound to a global actor: A struct or class containing a wrapped instance property with a global actor-qualified wrappedValue infers actor isolation from that property wrapper Update May 1, 2024: SE-0401: Remove Actor Isolation Inference caused by Property Wrappers removes the above rule when compiling in Swift 6 language mode. This is a good change because it makes reasoning about actor isolation simpler. In the Swift 5 language mode, you can opt into the better behavior with the -enable-upcoming-feature DisableOutwardActorInference compiler flags. I recommend you do. The lesson: if you use helper properties or functions in your view, consider annotating them with @MainActor to get the same semantics as body. By the way, note that the actor context only applies to code that is placed directly inside the async closure, as well as to synchronous functions the closure calls. Async functions choose their own execution context, so any call to an async function can switch to a different executor. For example, if you call URLSession.data(from:) inside a main-actor-annotated function, the runtime will hop to the global cooperative executor to execute that method. See SE-0338: Clarify the Execution of Non-Actor-Isolated Async Functions for the precise rules. On Appleâs policy to hide annotations in documentation I understand Appleâs impetus not to show unofficial API or language features in the documentation lest developers get the preposterous idea to use these features in their own code! But it makes understanding so much harder. Before I saw the annotations in the .swiftinterface file, the behavior of the code at the beginning of this article never made sense to me. Hiding the details makes things seem like magic when they actually arenât. And thatâs not good, either.
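To make the lesson above concrete, here is a hedged sketch of the fix for the helperView example from the beginning of this article: annotating the helper property with @MainActor gives it the same isolation as body, so the call compiles without await. (This reuses the onMainActor function from the earlier example.)

```swift
import SwiftUI

@MainActor func onMainActor() { print("on MainActor") }

struct FixedContentView: View {
    var body: some View {
        VStack {
            helperView
            Text("in body")
                .task { onMainActor() }
        }
    }

    // Annotated explicitly, so .task inherits the main-actor context here too.
    @MainActor var helperView: some View {
        Text("in helperView")
            .task { onMainActor() } // now compiles without 'await'
    }
}
```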
Experimenting with Live Activities
iOS 16 beta 4 is the first SDK release that supports Live Activities. A Live Activity is a widget-like view an app can place on your lock screen and update in real time. Examples where this can be useful include live sports scores or train departure times. These are my notes on playing with the API and implementing my first Live Activity. A bike computer on your lock screen My Live Activity is a display for a bike computer that Iâve been developing with a group a friends. Hereâs a video of it in action: Download video And here with simulated data: Download video I havenât talked much about our bike computer project publicly yet; that will hopefully change someday. In short, a group of friends and I designed a little box that connects to your bikeâs hub dynamo, measures speed and distance, and sends the data via Bluetooth to an iOS app. The app records all your rides and can also act as a live speedometer when mounted on your bikeâs handlebar. Itâs this last feature that I wanted to replicate in the Live Activity. Follow Appleâs guide Adding a Live Activity to the app wasnât hard. I found Appleâs guide Displaying live data on the Lock Screen with Live Activities easy to follow and quite comprehensive. No explicit user approval iOS doesnât ask the user for approval when an app wants to show a Live Activity. I found this odd since it seems to invite developers to abuse the feature, but maybe itâs OK because of the foreground requirement (see below). Plus, users can disallow Live Activities on a per-app basis in Settings. Users can dismiss an active Live Activity from the lock screen by swiping (like a notification). Most apps will probably need to ask the user for notification permissions to update their Live Activities. The app must be in the foreground to start an activity To start a Live Activity, an app must be open in the foreground. This isnât ideal for the bike computer because the speedometer canât appear magically on the lock screen when the user starts riding (even though iOS wakes up the app in the background at this point to deliver the Bluetooth events from the bike). The user has to open the app manually at least once. On the other hand, this limitation may not be an issue for most use cases and will probably cut down on spamming/abuse significantly. The app must keep running in the background to update the activity (or use push notifications) As long as the app keeps running (in the foreground or background), it can update the Live Activity as often as it wants (I think). This is ideal for the bike computer as the app keeps running in the background processing Bluetooth events while the bike is in motion. I assume the same applies to other apps that can remain alive in the background, such as audio players or navigation apps doing continuous location monitoring. Updating the Live Activity once per second was no problem in my testing, and I didnât experience any rate limiting. Most apps get suspended in the background, however. They must use push notifications to update their Live Activity (or background tasks or some other mechanism to have the system wake you up). Apple introduced a new kind of push notification that is delivered directly to the Live Activity, bypassing the app altogether. I havenât played with push notification updates, so I donât know the benefits of using this method over sending a silent push notification to wake the app and updating the Live Activity from there. Probably less aggressive rate limiting? 
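For orientation, here is a rough sketch of the ActivityKit calls involved, using the iOS 16.0-era API. The BikeRideAttributes type and its fields are made up for illustration; the real project's types will differ:

```swift
import ActivityKit

// Shared between the app target and the widget extension.
struct BikeRideAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var speedKmh: Double
        var distanceKm: Double
    }
    var bikeName: String
}

func startAndUpdateLiveActivity() async throws {
    let attributes = BikeRideAttributes(bikeName: "Commuter")
    let initialState = BikeRideAttributes.ContentState(speedKmh: 0, distanceKm: 0)

    // Starting an activity requires the app to be in the foreground.
    let activity = try Activity<BikeRideAttributes>.request(
        attributes: attributes,
        contentState: initialState
    )

    // Later, while the app is still running (foreground or background):
    await activity.update(
        using: BikeRideAttributes.ContentState(speedKmh: 24.5, distanceKm: 3.2)
    )
}
```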
Lock screen color matching I havenât found a good way to match my Live Activityâs colors to the current system colors on the lock screen. By default, text in a Live Activity is black in light mode, whereas the built-in lock screen themes seem to favor white or other light text colors. If there is an API or environment value that allows apps to match the color style of the current lock screen, I havenât found it. I experimented with various foreground styles, such as materials, without success. I ended up hardcoding the foreground color, but Iâm not satisfied with the result. Depending on the userâs lock screen theme, the Live Activity can look out of place. The default text color of a Live Activity in light mode is black. This doesnât match most lock screen themes. Animations canât be disabled Appleâs guide clearly states that developers have little control over animations in a Live Activity: Animate content updates When you define the user interface of your Live Activity, the system ignores any animation modifiers â for example, withAnimation(_:_:) and animation(_:value:) â and uses the systemâs animation timing instead. However, the system performs some animation when the dynamic content of the Live Activity changes. Text views animate content changes with blurred content transitions, and the system animates content transitions for images and SF Symbols. If you add or remove views from the user interface based on content or state changes, views fade in and out. Use the following view transitions to configure these built-in transitions: opacity, move(edge:), slide, push(from:), or combinations of them. Additionally, request animations for timer text with numericText(countsDown:). It makes total sense to me that Apple doesnât want developers to go crazy with animations on the lock screen, and perhaps having full control over animations also makes it easier for Apple to integrate Live Activities into the always-on display thatâs probably coming on the next iPhone. What surprised me is that I couldnât find a way to disable the text change animations altogether. I find the blurred text transitions for the large speed value quite distracting and I think this label would look better without any animations. But no combination of .animation(nil), .contentTransition(.identity), and .transition(.identity) would do this. Sharing code between app and widget A Live Activity is very much like a widget: the UI must live in your appâs widget extension. You start the Live Activity with code that runs in your app, though. Both targets (the app and the widget extension) need access to a common data type that represents the data the widget displays. You should have a third target (a framework or SwiftPM package) that contains such shared types and APIs and that the downstream targets import. Availability annotations Update September 22, 2022: This limitation no longer applies. The iOS 16.1 SDK added the ability to have availability conditions in WidgetBundle. Source: Tweet from Luca Bernardi (2022-09-20). WidgetBundle apparently doesnât support widgets with different minimum deployment targets. 
If your widget extension has a deployment target of iOS 14 or 15 for an existing widget and you now want to add a Live Activity, I'd expect your widget bundle to look like this: @main struct MyWidgets: WidgetBundle { var body: some Widget { MyNormalWidget() // Error: Closure containing control flow statement cannot // be used with result builder 'WidgetBundleBuilder' if #available(iOSApplicationExtension 16.0, *) { MyLiveActivityWidget() } } } But this doesn't compile because the result builder type used by WidgetBundle doesn't support availability conditions. I hope Apple fixes this. This wasn't a problem for me because our app didn't have any widgets until now, so I just set the deployment target of the widget extension to iOS 16.0. If you have existing widgets and can't require iOS 16 yet, a workaround is to add a second widget extension target just for the Live Activity. I haven't tried this, but WidgetKit explicitly supports having multiple widget extensions, so it should work: Typically, you include all your widgets in a single widget extension, although your app can contain multiple extensions.
How @MainActor works
@MainActor is a Swift annotation to coerce a function to always run on the main thread and to enable the compiler to verify this. How does this work? In this article, Iâm going to reimplement @MainActor in a slightly simplified form for illustration purposes, mainly to show how little âmagicâ there is to it. The code of the real implementation in the Swift standard library is available in the Swift repository. @MainActor relies on two Swift features, one of them unofficial: global actors and custom executors. Global actors MainActor is a global actor. That is, it provides a single actor instance that is shared between all places in the code that are annotated with @MainActor. All global actors must implement the shared property thatâs defined in the GlobalActor protocol (every global actor implicitly conforms to this protocol): @globalActor final actor MyMainActor { // Requirements from the implicit GlobalActor conformance typealias ActorType = MyMainActor static var shared: ActorType = MyMainActor() // Donât allow others to create instances private init() {} } At this point, we have a global actor that has the same semantics as any other actor. That is, functions annotated with @MyMainActor will run on a thread in the cooperative thread pool managed by the Swift runtime. To move the work to the main thread, we need another concept, custom executors. Executors A bit of terminology: The compiler splits async code into jobs. A job roughly corresponds to the code from one await (= potential suspension point) to the next. The runtime submits each job to an executor. The executor is the object that decides in which order and in which context (i.e. which thread or dispatch queue) to run the jobs. Swift ships with two built-in executors: the default concurrent executor, used for ânormalâ, non-actor-isolated async functions, and a default serial executor. Every actor instance has its own instance of this default serial executor and runs its code on it. Since the serial executor, like a serial dispatch queue, only runs a single job at a time, this prevents concurrent accesses to the actorâs state. Custom executors As of Swift 5.6, executors are an implementation detail of Swiftâs concurrency system, but itâs almost certain that they will become an official feature fairly soon. Why? Because it can sometimes be useful to have more control over the execution context of async code. Some examples are listed in a draft proposal for allowing developers to implement custom executors that was first pitched in February 2021 but then didnât make the cut for Swift 5.5. @MainActor already uses the unofficial ability for an actor to provide a custom executor, and weâre going to do the same for our reimplementation. A serial executor that runs its job on the main dispatch queue is implemented as follows. The interesting bit is the enqueue method, where we tell the job to run on the main dispatch queue: final class MainExecutor: SerialExecutor { func asUnownedSerialExecutor() -> UnownedSerialExecutor { UnownedSerialExecutor(ordinary: self) } func enqueue(_ job: UnownedJob) { DispatchQueue.main.async { job._runSynchronously(on: self.asUnownedSerialExecutor()) } } } Weâre responsible for keeping an instance of the executor alive, so letâs store it in a global: private let mainExecutor = MainExecutor() Finally, we need to tell our global actor to use the new executor: import Dispatch @globalActor final actor MyMainActor { // ... 
// Requirement from the implicit GlobalActor conformance static var sharedUnownedExecutor: UnownedSerialExecutor { mainExecutor.asUnownedSerialExecutor() } // Requirement from the implicit Actor conformance nonisolated var unownedExecutor: UnownedSerialExecutor { mainExecutor.asUnownedSerialExecutor() } } Thatâs all there is to reimplement the basics of @MainActor. Conclusion The full code is on GitHub, including a usage example to demonstrate that the @MyMainActor annotations work. John McCallâs draft proposal for custom executors is worth reading, particularly the philosophy section. Itâs an easy-to-read summary of some of the design principles behind Swiftâs concurrency system: Swiftâs concurrency design sees system threads as expensive and rather precious resources. ⊠It is therefore best if the system allocates a small number of threads â just enough to saturate the available cores â and for those threads [to] only block for extended periods when there is no pending work in the program. Individual functions cannot effectively make this decision about blocking, because they lack a holistic understanding of the state of the program. Instead, the decision must be made by a centralized system which manages most of the execution resources in the program. This basic philosophy of how best to use system threads drives some of the most basic aspects of Swiftâs concurrency design. In particular, the main reason to add async functions is to make it far easier to write functions that, unlike standard functions, will reliably abandon a thread when they need to wait for something to complete. And: The default concurrent executor is used to run jobs that donât need to run somewhere more specific. It is based on a fixed-width thread pool that scales to the number of available cores. Programmers therefore do not need to worry that creating too many jobs at once will cause a thread explosion that will starve the program of resources.
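To round off the reimplementation above, here is a small, hedged usage sketch (not the example from the GitHub repo) showing that a function annotated with the @MyMainActor global actor defined earlier ends up running on the main dispatch queue:

```swift
import Dispatch

// Assumes the MyMainActor and MainExecutor implementation from above.
@MyMainActor
func updateUI() {
    // If MyMainActor's custom executor works, this precondition passes.
    dispatchPrecondition(condition: .onQueue(.main))
    print("Running on the main queue")
}

func kickOff() {
    Task.detached {
        // Calling into the global actor hops to MainExecutor before
        // running the function body, hence the required 'await'.
        await updateUI()
    }
}
```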
AttributedString's Codable format and what it has to do with Unicode
Hereâs a simple AttributedString with some formatting: import Foundation let str = try! AttributedString( markdown: "CafĂ© **Sol**", options: .init(interpretedSyntax: .inlineOnly) ) AttributedString is Codable. If your task was to design the encoding format for an attributed string, what would you come up with? Something like this seems reasonable (in JSON with comments): { "text": "CafĂ© Sol", "runs": [ { // start..<end in Character offsets "range": [5, 8], "attrs": { "strong": true } } ] } This stores the text alongside an array of runs of formatting attributes. Each run consists of a character range and an attribute dictionary. Unicode is complicated But this format is bad and can break in various ways. The problem is that the character offsets that define the runs arenât guaranteed to be stable. The definition of what constitutes a Character, i.e. a user-perceived character, or a Unicode grapheme cluster, can and does change in new Unicode versions. If we decoded an attributed string that had been serialized on a different OS version (before Swift 5.6, Swift used the OSâs Unicode library for determining character boundaries), or by code compiled with a different Swift version (since Swift 5.6, Swift uses its own grapheme breaking algorithm that will be updated alongside the Unicode standard)1, the character ranges might no longer represent the original intent, or even become invalid. Update April 11, 2024: See this Swift forum post I wrote for an example where the Unicode rules for grapheme cluster segmentation changed for flag emoji. This change caused a corresponding change in how Swift counts the Characters in a string containing consecutive flags, such as "đŠđ·đŻđ”". Normalization forms So letâs use UTF-8 byte offsets for the ranges, I hear you say. This avoids the first issue but still isnât safe, because some characters, such as the Ă© in the example string, have more than one representation in Unicode: it can be either the standalone character Ă© (Latin small letter e with acute) or the combination of e + âÌ (Combining acute accent). The Unicode standard calls these variants normalization forms.2 The first form needs 2 bytes in UTF-8, whereas the second uses 3 bytes, so subsequent ranges would be off by one if the string and the ranges used different normalization forms. Now in theory, the string itself and the ranges should use the same normalization form upon serialization, avoiding the problem. But this is almost impossible to guarantee if the serialized data passes through other systems that may (inadvertently or not) change the Unicode normalization of the strings that pass through them. A safer option would be to store the text not as a string but as a blob of UTF-8 bytes, because serialization/networking/storage layers generally donât mess with binary data. But even then youâd have to be careful in the encoding and decoding code to apply the formatting attributes before any normalization takes place. Depending on how your programming language handles Unicode, this may not be so easy. Foundationâs solution The people on the Foundation team know all this, of course, and chose a better encoding format for Attributed String. 
Letâs take a look.3 let encoder = JSONEncoder() encoder.outputFormatting = [.prettyPrinted, .sortedKeys] let jsonData = try encoder.encode(str) let json = String(decoding: jsonData, as: UTF8.self) This is how our sample string is encoded: [ "CafĂ© ", { }, "Sol", { "NSInlinePresentationIntent" : 2 } ] This is an array of runs, where each run consists of a text segment and a dictionary of formatting attributes. The important point is that the formatting attributes are directly associated with the text segments they belong to, not indirectly via brittle byte or character offsets. (This encoding format is also more space-efficient and possibly better represents the in-memory layout of AttributedString, but thatâs beside the point for this discussion.) Thereâs still a (smaller) potential problem here if the character boundary rules change for code points that span two adjacent text segments: the last character of run N and the first character of run N+1 might suddenly form a single character (grapheme cluster) in a new Unicode version. In that case, the decoding code will have to decide which formatting attributes to apply to this new character. But this is a much smaller issue because it only affects the characters in question. Unlike our original example, where an off-by-one error in run N would affect all subsequent runs, all other runs are untouched. Related forum discussion: Itai Ferber on why Character isnât Codable. Storing string offsets is a bad idea We can extract a general lesson out of this: Donât store string indices or offsets if possible. They arenât stable over time or across runtime environments. On Apple platforms, the Swift standard library ships as part of the OS so Iâd guess that the standard libraryâs grapheme breaking algorithm will be based on the same Unicode version that ships with the corresponding OS version. This is effectively no change in behavior compared to the pre-Swift 5.6 world (where the OSâs ICU library determined the Unicode version). On non-ABI-stable platforms (e.g. Linux and Windows), the Unicode version used by your program is determined by the version of the Swift compiler your program is compiled with, if my understanding is correct. â©ïž The Swift standard library doesnât have APIs for Unicode normalization yet, but you can use the corresponding NSString APIs, which are automatically added to String when you import Foundation: import Foundation let precomposed = "Ă©".precomposedStringWithCanonicalMapping let decomposed = "Ă©".decomposedStringWithCanonicalMapping precomposed == decomposed // â true precomposed.unicodeScalars.count // â 1 decomposed.unicodeScalars.count // â 2 precomposed.utf8.count // â 2 decomposed.utf8.count // â 3 â©ïž By the way, I see a lot of code using String(jsonData, encoding: .utf8)! to create a string from UTF-8 data. String(decoding: jsonData, as: UTF8.self) saves you a force-unwrap and is arguably âcleanerâ because it doesnât depend on Foundation. Since it never fails, itâll insert replacement characters into the string if it encounters invalid byte sequences. â©ïž
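For completeness, decoding the JSON produced above back into an AttributedString is a one-liner. A small sketch, reusing the str and jsonData values from earlier in this post:

```swift
import Foundation

let decoded = try JSONDecoder().decode(AttributedString.self, from: jsonData)
assert(decoded == str) // the runs and their attributes survive the round trip
```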
A heterogeneous dictionary with strong types in Swift
The environment in SwiftUI is sort of like a global dictionary but with stronger types: each key (represented by a key path) can have its own specific value type. For example, the \.isEnabled key stores a boolean value, whereas the \.font key stores an Optional<Font>. I wrote a custom dictionary type that can do the same thing. The HeterogeneousDictionary struct I show in this article stores mixed key-value pairs where each key defines the type of value it stores. The public API is fully type-safe, no casting required. Usage Iâll start with an example of the finished API. Hereâs a dictionary for storing text formatting attributes: import AppKit var dict = HeterogeneousDictionary<TextAttributes>() dict[ForegroundColor.self] // â nil // The value type of this key is NSColor dict[ForegroundColor.self] = NSColor.systemRed dict[ForegroundColor.self] // â NSColor.systemRed dict[FontSize.self] // â nil // The value type of this key is Double dict[FontSize.self] = 24 dict[FontSize.self] // â 24 (type: Optional<Double>) We also need some boilerplate to define the set of keys and their associated value types. The code to do this for three keys (font, font size, foreground color) looks like this: // The domain (aka "keyspace") enum TextAttributes {} struct FontSize: HeterogeneousDictionaryKey { typealias Domain = TextAttributes typealias Value = Double } struct Font: HeterogeneousDictionaryKey { typealias Domain = TextAttributes typealias Value = NSFont } struct ForegroundColor: HeterogeneousDictionaryKey { typealias Domain = TextAttributes typealias Value = NSColor } Yes, this is fairly long, which is one of the downsides of this approach. At least you only have to write it once per âkeyspaceâ. Iâll walk you through it step by step. Notes on the API Using types as keys As you can see in this line, the dictionary keys are types (more precisely, metatype values): dict[FontSize.self] = 24 This is another parallel with the SwiftUI environment, which also uses types as keys (the public environment API uses key paths as keys, but youâll see the types underneath if you ever define your own environment key). Why use types as keys? We want to establish a relationship between a key and the type of values it stores, and we want to make this connection known to the type system. The way to do this is by defining a type that sets up this link. Domains aka âkeyspacesâ A standard Dictionary is generic over its key and value types. This doesnât work for our heterogeneous dictionary because we have multiple value types (and we want more type safety than Any provides). Instead, a HeterogeneousDictionary is parameterized with a domain: // The domain (aka "keyspace") enum TextAttributes {} var dict = HeterogeneousDictionary<TextAttributes>() The domain is the âkeyspaceâ that defines the set of legal keys for this dictionary. Only keys that belong to the domain can be put into the dictionary. The domain type has no protocol constraints; you can use any type for this. Defining keys A key is a type that conforms to the HeterogeneousDictionaryKey protocol. The protocol has two associated types that define the relationships between the key and its domain and value type: protocol HeterogeneousDictionaryKey { /// The "namespace" the key belongs to. associatedtype Domain /// The type of values that can be stored /// under this key in the dictionary. 
associatedtype Value } You define a key by creating a type and adding the conformance: struct Font: HeterogeneousDictionaryKey { typealias Domain = TextAttributes typealias Value = NSFont } Implementation notes A minimal implementation of the dictionary type is quite short: struct HeterogeneousDictionary<Domain> { private var storage: [ObjectIdentifier: Any] = [:] var count: Int { self.storage.count } subscript<Key>(key: Key.Type) -> Key.Value? where Key: HeterogeneousDictionaryKey, Key.Domain == Domain { get { self.storage[ObjectIdentifier(key)] as! Key.Value? } set { self.storage[ObjectIdentifier(key)] = newValue } } } Internal storage private var storage: [ObjectIdentifier: Any] = [:] Internally, HeterogeneousDictionary uses a dictionary of type [ObjectIdentifier: Any] for storage. We canât use a metatype such as Font.self directly as a dictionary key because metatypes arenât hashable. But we can use the metatypeâs ObjectIdentifier, which is essentially the address of the typeâs representation in memory. Subscript subscript<Key>(key: Key.Type) -> Key.Value? where Key: HeterogeneousDictionaryKey, Key.Domain == Domain { get { self.storage[ObjectIdentifier(key)] as! Key.Value? } set { self.storage[ObjectIdentifier(key)] = newValue } } The subscript implementation constrains its arguments to keys in the same domain as the dictionaryâs domain. This ensures that you canât subscript a dictionary for text attributes with some other unrelated key. If you find this too restrictive, you could also remove all references to the Domain type from the code; it would still work. Using key paths as keys Types as keys donât have the best syntax. I think youâll agree that dict[FontSize.self] doesnât read as nice as dict[\.fontSize], so I looked into providing a convenience API based on key paths. My preferred solution would be if users could define static helper properties on the domain type, which the dictionary subscript would then accept as key paths, like so: extension TextAttributes { static var fontSize: FontSize.Type { FontSize.self } // Same for font and foregroundColor } Sadly, this doesnât work because Swift 5.6 doesnât (yet?) support key paths to static properties (relevant forum thread). We have to introduce a separate helper type that acts as a namespace for these helper properties. Since the dictionary type can create an instance of the helper type, it can access the non-static helper properties. This doesnât feel as clean to me, but it works. I called the helper type HeterogeneousDictionaryValues as a parallel with EnvironmentValues, which serves the same purpose in SwiftUI. The code for this is included in the Gist. Drawbacks Is the HeterogeneousDictionary type useful? Iâm not sure. I wrote this mostly as an exercise and havenât used it yet in a real project. In most cases, if you need a heterogeneous record with full type safety, itâs probably easier to just write a new struct where each property is optional â the boilerplate for defining the dictionary keys is certainly longer and harder to read. For representing partial values, i.e. 
struct-like records where some but not all properties have values, take a look at these two approaches from 2018: Ian Keen, Type-safe temporary models (2018-06-05) Joseph Duffy, Partial in Swift (2018-07-10), also available as a library These use a similar storage approach (a dictionary of Any values with custom accessors to make it type-safe), but they use an existing struct as the domain/keyspace, combined with partial key paths into that struct as the keys. I honestly think that this is the better design for most situations. Aside from the boilerplate, here are a few more weaknesses of HeterogeneousDictionary: Storage is inefficient because values are boxed in Any containers Accessing values is inefficient: every access requires unboxing HeterogeneousDictionary can't easily conform to Sequence and Collection because these protocols require a uniform element type The code The full code is available in a Gist.
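As a rough illustration of the key-path convenience described in the "Using key paths as keys" section above, here is a hedged sketch of how such a helper namespace and subscript could look. The names are my own and the Gist's actual implementation may differ:

```swift
// Helper namespace, analogous to SwiftUI's EnvironmentValues.
// Its instance properties return the metatype keys defined earlier.
struct TextAttributeValues {
    var fontSize: FontSize.Type { FontSize.self }
    var font: Font.Type { Font.self }
    var foregroundColor: ForegroundColor.Type { ForegroundColor.self }
}

extension HeterogeneousDictionary where Domain == TextAttributes {
    subscript<Key>(keyPath: KeyPath<TextAttributeValues, Key.Type>) -> Key.Value?
    where Key: HeterogeneousDictionaryKey, Key.Domain == Domain {
        get { self[TextAttributeValues()[keyPath: keyPath]] }
        set { self[TextAttributeValues()[keyPath: keyPath]] = newValue }
    }
}

// Usage: reads much closer to a regular property access.
var attributes = HeterogeneousDictionary<TextAttributes>()
attributes[\.fontSize] = 24
attributes[\.fontSize] // Optional(24.0)
```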
Advanced Swift, fifth edition
We released the fifth edition of our book Advanced Swift a few days ago. You can buy the ebook on the objc.io site. The hardcover print edition is printed and sold by Amazon (amazon.com, amazon.co.uk, amazon.de). Highlights of the new edition: Fully updated for Swift 5.6 A new Concurrency chapter covering async/await, structured concurrency, and actors New content on property wrappers, result builders, protocols, and generics The print edition is now a hardcover (for the same price) Free update for owners of the ebook A growing book for a growing language Updating the book always turns out to be more work than I expect. Swift has grown substantially since our last release (for Swift 5.0), and the size of the book reflects this. The fifth edition is 76% longer than the first edition from 2016. This time, we barely stayed under 1 million characters: Character counts of Advanced Swift editions from 2016-2022. Many thanks to our editor, Natalye, for reading all this and improving our Dutch/German dialect of English. Hardcover For the first time, the print edition comes in hardcover (for the same price). Being able to offer this makes me very happy. The hardcover book looks much better and is more likely to stay open when laid flat on a table. We also increased the page size from 15×23 cm (6×9 in) to 18×25 cm (7×10 in) to keep the page count manageable (Amazon's print on demand service limits hardcover books to 550 pages). I hope you enjoy the new edition. If you decide to buy the book or if you bought it in the past, thank you very much! And if you're willing to write a review on Amazon, we'd appreciate it.
Synchronous functions can support cancellation too
Cancellation is a Swift concurrency feature, but this doesnât mean itâs only available in async functions. Synchronous functions can also support cancellation, and by doing so theyâll become better concurrency citizens when called from async code. Motivating example: JSONDecoder Supporting cancellation makes sense for functions that can block for significant amounts of time (say, more than a few milliseconds). Take JSON decoding as an example. Suppose we wrote an async function that performs a network request and decodes the downloaded JSON data: import Foundation func loadJSON<T: Decodable>(_ type: T.Type, from url: URL) async throws -> T { let (data, _) = try await URLSession.shared.data(from: url) return try JSONDecoder().decode(type, from: data) } The JSONDecoder.decode call is synchronous: it will block its thread until it completes. And if the download is large, decoding may take hundreds of milliseconds or even longer. Avoid blocking if possible In general, async code should avoid calling blocking APIs if possible. Instead, async functions are expected to suspend regularly to give waiting tasks a chance to run. But JSONDecoder doesnât have an async API (yet?), and Iâm not even sure it can provide one that works with the existing Codable protocols, so letâs work with what we have. And if you think about it, itâs not totally unreasonable for JSONDecoder to block. After all, it is performing CPU-intensive work (assuming the data itâs working on doesnât have to be paged in), and this work has to happen on some thread. Async/await works best for I/O-bound functions that spend most of their time waiting for the disk or the network. If an I/O-bound function suspends, the runtime can give the functionâs thread to another task that can make more productive use of the CPU. Responding to cancellation Cancellation is a cooperative process. Canceling a task only sets a flag in the taskâs metadata. Itâs up to individual functions to periodically check for cancellation and abort if necessary. If a function doesnât respond promptly to cancellation or outright ignores the cancellation flag, the program may appear to the user to be stalling. Now, if the task is canceled while JSONDecoder.decode is running, our loadJSON function canât react properly because it canât interrupt the decoding process. To fix this, the decode method would have to perform its own periodic cancellation checks, using the usual APIs, Task.isCancelled or Task.checkCancellation(). These can be called from anywhere, including synchronous code. Internals How does this work? How can synchronous code access task-specific metadata? Hereâs the code for Task.isCancelled in the standard library: extension Task where Success == Never, Failure == Never { public static var isCancelled: Bool { withUnsafeCurrentTask { task in task?.isCancelled ?? false } } } This calls withUnsafeCurrentTask to get a handle to the current task. When the runtime schedules a task to run on a particular thread, it stores a pointer to the task object in that threadâs thread-local storage, where any code running on that thread â sync or async â can access it. If task == nil, there is no current task, i.e. we havenât been called (directly or indirectly) from an async function. In this case, cancellation doesnât apply, so we can return false. If we do have a task handle, we ask the task for its isCancelled flag and return that. Reading the flag is an atomic (thread-safe) operation because other threads may be writing to it concurrently. 
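To illustrate what periodic cancellation checks in synchronous code can look like, here is a hedged sketch of a long-running synchronous function. The chunked-processing structure is made up for illustration; JSONDecoder itself offers no such hook today:

```swift
import Foundation

/// Synchronously processes a large buffer in chunks, checking for cancellation
/// between chunks. Safe to call from sync or async code; outside of a task,
/// Task.isCancelled is always false and checkCancellation never throws.
func processLargeBuffer(_ data: Data, chunkSize: Int = 64 * 1024) throws -> Int {
    var checksum = 0
    var offset = data.startIndex
    while offset < data.endIndex {
        // Cooperative cancellation: bail out promptly if our task was cancelled.
        try Task.checkCancellation()

        let end = data.index(offset, offsetBy: chunkSize, limitedBy: data.endIndex)
            ?? data.endIndex
        checksum = data[offset..<end].reduce(checksum) { $0 &+ Int($1) }
        offset = end
    }
    return checksum
}
```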
Conclusion

I hope we'll see cancellation support in the Foundation encoders and decoders in the future. If you have written synchronous functions that can potentially block their thread for a significant amount of time, consider adding periodic cancellation checks. It's a quick way to make your code work better with the concurrency system, and you don't even have to change your API to do it.

Update February 2, 2022: Jordan Rose argues that cancellation support for synchronous functions should be opt-in because it introduces a failure mode that's hard to reason about locally, as the "source" of the failure (the async context) may be several levels removed from the call site. Definitely something to consider!
Cancellation can come in many forms
In Swift's concurrency model, cancellation is cooperative. To be a good concurrency citizen, code must periodically check if the current task has been cancelled, and react accordingly. You can check for cancellation by calling Task.isCancelled or with try Task.checkCancellation(); the latter will exit by throwing a CancellationError if the task has been cancelled.

By convention, functions should react to cancellation by throwing a CancellationError. But this convention isn't enforced, so callers must be aware that cancellation can manifest itself in other forms. Here are some other ways functions might respond to cancellation:

Throw a different error. For example, the async networking APIs in Foundation, such as URLSession.data(from: URL), throw a URLError with the code URLError.Code.cancelled on cancellation. It'd be nice if URLSession translated this error to CancellationError, but it doesn't.

Return a partial result. A function that has completed part of its work when cancellation occurs may choose to return a partial result rather than throwing the work away and aborting. In fact, this may be the best choice for a non-throwing function. But note that this behavior can be extremely surprising to callers, so be sure to document it clearly.

Do nothing. Functions are supposed to react promptly to cancellation, but callers must assume the worst. Even if cancelled, a function might run to completion and finish normally. Or it might eventually respond to cancellation by aborting, but not promptly because it doesn't perform its cancellation checks often enough.

So as the caller of a function, you can't really rely on specific cancellation behavior unless you know how the callee is implemented. Code that wants to know if its task has been cancelled should itself call Task.isCancelled, rather than counting on catching a CancellationError from a callee.
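To make the caller-side advice concrete, here is a minimal sketch (not from the original post) of how a caller might normalise these different cancellation forms: it checks the task's own cancellation flag instead of trusting the callee, and translates URLSession's URLError.Code.cancelled into the conventional CancellationError. The fetchUser function and the User type are hypothetical.

import Foundation

struct User: Decodable {
    let id: Int
    let name: String
}

// Hypothetical example: the caller performs its own cancellation check and
// maps URLSession's cancellation error onto CancellationError.
func fetchUser(from url: URL) async throws -> User {
    do {
        let (data, _) = try await URLSession.shared.data(from: url)
        // The download may have completed normally even if the task was
        // cancelled in the meantime, so check the flag ourselves rather than
        // relying on the callee to have thrown.
        try Task.checkCancellation()
        return try JSONDecoder().decode(User.self, from: data)
    } catch let error as URLError where error.code == .cancelled {
        // Translate Foundation's cancellation error into the conventional form.
        throw CancellationError()
    }
}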

Software Development News
This week in AI updates: Amazon Quick Suite, Gemini Enterprise, and more (October 10, 2025)
- Latest News
- Amazon
- Atlassian
- IBM
- OpenAI
Amazon launches Quick Suite to provide agentic AI across applications and AWS services

Amazon Quick Suite allows users to ask questions, conduct deep research, analyze and visualize data, and create automations. It can connect to internal repositories, like wikis or intranet, and AWS services. Amazon also offers 50+ built-in connectors to applications like Adobe Analytics, SharePoint, Snowflake, Google Drive, OneDrive, Outlook, ServiceNow, and Databricks, as well as support for over 1,000 apps via connections to their MCP servers. This deep connection across the enterprise enables Quick Sight to analyze data across all of a company's systems and create complex business workflows across multiple applications and departments.

"Unlike traditional business intelligence tools that work only with databases and data warehouses, Quick Sight's agentic experience analyzes all forms of data across all your systems and apps, including your documents," Amazon wrote in a blog post.

Google unveils Gemini Enterprise to offer companies a more unified platform for AI innovation

Google is announcing a new offering built around Gemini, designed specifically with large enterprise use in mind. Gemini Enterprise consolidates six core components:

- Advanced Gemini models
- A no-code workbench for analyzing information and orchestrating agents
- Pre-built Google agents for tasks like deep research or data insights
- The ability to connect to company data
- A central governance framework for visualizing and securing all agents
- Access to an ecosystem of over 100,000 industry partners

"By bringing all of these components together through a single interface, Gemini Enterprise transforms how teams work. It moves beyond simple tasks to automate entire workflows and drive smarter business outcomes, all on Google's secure, enterprise-grade architecture," Thomas Kurian, CEO of Google Cloud, wrote in a blog post.

Atlassian shares major updates to its genAI assistant Rovo at Team '25 Europe

Atlassian is hosting its annual user conference Team '25 Europe this week in Barcelona, and during the event, the company shared several new and upcoming updates to its generative AI assistant Rovo. Atlassian announced the general availability of its AI coding agent Rovo Dev. Rovo Dev can help with code reviews, documentation, dependency cleanups, and more, and it leverages context from tickets, docs, incidents, and business goals to provide developers with information that will help them make more informed decisions. Additionally, starting early next year, Rovo Search will become the default search in Jira, which will allow Jira's search to suggest relevant issues and projects. Rovo Chat will also be getting over 100 out-of-the-box modular capabilities from Atlassian and its partners that can be used in chat, agents, and workflows. Other new Chat capabilities include the ability to remember past conversations and preferences and a new collaborative workspace called Canvas.

Google launches ecosystem of extensions for Gemini CLI

Google is launching Gemini CLI extensions to allow different development tools to connect up to the Gemini CLI. Each extension includes a playbook that teaches the CLI how to effectively use that tool, eliminating the need for developers to configure them. "If you want to look under the hood, Gemini CLI extensions package instructions, MCP servers and custom commands into a familiar and user-friendly format," Google wrote in a blog post.

Twenty-two extensions are available at launch from Google partners Atlassian, Canva, Confluent, Dynatrace, Elastic, Figma, GitLab, Grafana Labs, Harness, HashiCorp, MongoDB, Neo4j, Pinecone, Postman, Qodo, Shopify, Snyk, Sonar, Stripe, ThoughtSpot, Weights & Biases by CoreWeave, and WIX.

IBM adds new capabilities to watsonx Orchestrate to facilitate agentic AI at scale

As IBM kicked off its annual developer event TechXchange 2025, it announced several new capabilities to enable organizations to unlock value from agentic AI. "There's certainly been a lot of buzz in the industry," said Bruno Aziza, vice president of Data, AI, and Analytics Strategy at IBM Software. "I think if you look at the context of everything that's going on, customers are struggling. They're struggling to get value from their investment."

IBM announced many updates to its AI agent orchestration platform, watsonx Orchestrate. The platform now includes AgentOps, an observability and governance layer for AI agents; Agentic Workflows, standardized and reusable flows that can be used to build and sequence multi-agent systems; and Langflow integration to reduce agent setup time.

OpenAI DevDay: ChatGPT Apps, AgentKit, and GA release of Codex

OpenAI held its annual Developer Day event this week where it announced several updates to its products. The company unveiled apps in ChatGPT as well as an SDK for developers to build them. Companies that have created apps that are already available include Booking.com, Canva, Coursera, Figma, Expedia, Spotify, and Zillow. When a user says the name of an available app in a prompt, ChatGPT will automatically surface that app in the chat. For example, saying "Spotify, make a playlist for my party this Friday" will bring in the Spotify app. ChatGPT will also be able to suggest apps when it thinks they're relevant to the conversation, such as suggesting Zillow's app in a conversation about buying a house.

Google's coding agent Jules now works in the command line

Google's coding agent Jules can now be used directly in developers' command lines so that it can act as more of a coding companion. According to Google, it created this new command line interface, called Jules Tools, out of a recognition that the terminal is where developers spend most of their time. Jules Tools allows developers to spin up tasks, inspect what Jules is doing, and integrate Jules into automation. "Think of Jules Tools as both a dashboard and a command surface for your coding agent," Google wrote in a blog post.
From vibe coding to vibe deployment: Closing the prototype-to-production gap
- Latest News
- Opinion
- Agent
- AI assistant
- LLM
- vibe coding
In February 2025, Andrej Karpathy coined the term "vibe coding" with a tweet that instantly resonated across the developer community. The idea was simple yet powerful: instead of writing code line-by-line, you describe what you want in natural language, and an AI model scaffolds the entire solution. No formal specs, no boilerplate grind, just vibes.

Vibe coding quickly gained traction because it removed the friction from starting a project. In minutes, developers could go from a vague product idea to a working prototype. It wasn't just about speed, it was about fluid creativity. Teams could explore ideas without committing weeks of engineering time. Viral demos, like the one Satya Nadella did, and various experiments reinforced the feeling that AI-assisted development wasn't just a curiosity; it was a glimpse into the future of software creation. But even in those early days, there was an unspoken reality: while AI could "vibe" out an MVP, the leap from prototype to production remained a formidable gap. That gap would soon become the central challenge for the next evolution of this trend.

The Hard Part: Why Prototypes Rarely Survive Contact with Prod

Vibe coding excels at ideation speed but struggles at deployment rigor. The path to production isn't a straight line; it's a maze of choices, constraints, and governance. A typical production deployment forces teams to make dozens of decisions:

- Language and runtime versions: not all are equally supported or approved in your environment. For example, your org may only certify Java 21 and Node.js 18 for production, but the agent picks Python 3.12 with a new async library that ops doesn't support yet.
- Infrastructure choices: Kubernetes? Serverless? VM-based? Each has its own scaling, networking, and security model. A prototype might assume AWS Lambda, but your preferred cloud provider is different. The choice of infrastructure will change the architecture as well.
- Third-party integrations: most solutions need to integrate with third-party systems via APIs or webhooks. A single task may involve several such systems, and each system may offer multiple API versions that differ significantly in functionality, authentication flows, and pricing.
- AI model usage: not every model is approved, and cost or privacy rules can limit choices. A developer might prototype with GPT-4o via a public API, but the organization only allows an internally hosted model for compliance and privacy reasons.

This combinatorial explosion overwhelms both human developers and AI agents. Without constraints, the agent might produce an architecture that's elegant in theory but incompatible with your production environment. Without guardrails, it may introduce security gaps, performance risks, or compliance violations that surface only after deployment. Operational realities (uptime SLAs, cost budgets, compliance checks, change management) require deliberate engineering discipline. These aren't things AI can guess; they have to be encoded in the system it works within. The result? Many vibe-coded prototypes either stall before deployment or require a full rewrite to meet production standards. The creative energy that made the prototype exciting gets bogged down in the slow grind of last-mile engineering.

Thesis: Constrain to Empower - Give the Agent a Bounded Context

The common instinct when working with large language models (LLMs) is to give them maximum freedom, more options, more tools.
But in software delivery, this is exactly what causes them to fail. When an agent has to choose between every possible language, runtime, library, deployment pattern, and infrastructure configuration, it's like asking a chef to cook a meal in a grocery store the size of a city: too many possibilities, no constraints, and no guarantee the ingredients will even work together. The real unlock for vibe deployment is constraint. Not arbitrary limits, but opinionated defaults baked into an Internal Developer Platform (IDP):

- A curated menu of programming languages and runtime versions that the organization supports and maintains.
- A blessed list of third-party services and APIs with approved versions and security reviews.
- Pre-defined infrastructure classes (databases, queues, storage) that align with organizational SLAs and cost models.
- A finite set of approved AI models and APIs with clear usage guidelines.

This "bounded context" transforms the agent's job. Instead of inventing an arbitrary solution, it assembles a system from known-good, production-ready building blocks. That means every artifact it generates, from application code to Kubernetes manifests, is deployable on day one. It is like providing a well-designed countertop with selected utensils and ingredients to a chef. In other words: freedom at the creative level, discipline at the operational level.

The Interface: Exposing the Platform via MCP

An opinionated platform is only useful if the agent can understand and operate within it. That's where the Model Context Protocol (MCP) comes in. MCP is like the menu interface between your internal developer platform and the AI agent. Instead of the agent guessing "What database engines are allowed here? Which version of the Salesforce API is approved?", it can ask the platform directly via MCP, and the platform responds with an authoritative answer. An MCP server will run alongside your IDP, exposing a set of structured capabilities (tools, metadata):

- Capabilities Catalog: lists the approved options for languages, libraries, infra resources, deployment patterns, and third-party APIs through tool descriptions.
- Golden Path Templates: accessible via tool descriptions so the agent can scaffold new projects with the correct structure, configuration, and security posture.
- Provisioning & Governance APIs: accessible through MCP tools, letting the agent request infra or run policy checks without leaving the bounded context.

For the LLM, MCP isn't just an API endpoint; it's the operational reality of your platform made machine-readable and operable. This makes the difference between "the agent might generate something deployable" and "the agent always generates something deployable." In our chef analogy, MCP is like the kitchen manager who hands the chef the pantry map and the menus, so the chef knows which ingredients and utensils are available and won't try to make wood-fired pizza with a gas oven.

Reference Architecture: "Prompt-to-Prod" Flow

Combining the thesis and the interface described above, we arrive at a reference architecture for vibe deployment: a five-step framework that pairs platform opinionation with agent guidance.

Inventory & Opinionate

Choose blessed languages, versions, third-party dependencies, infrastructure classes (databases, queues, storage), and deployment architectures (VM, Kubernetes).
Define blueprints, templates, and golden paths which bundle the above curated inventory and offer opinionated experiences. These will be abstractions that your business platform will use, like backend components, web apps, and tasks. A golden path might be a definition that says: for backend services, use a pinned Go version with a MySQL database. Clearly document what's in scope and off-menu so both humans and agents operate within the same boundaries.

Build / Modify the Platform

Adapt your internal developer platform to reflect these opinionated decisions. This will include adding new infrastructure and services to make the opinionated resources available. If you standardize on a particular language version, that means having proper base images in your container registries. If you decide on a particular third-party dependency, that means having a subscription and keeping that subscription information in your configuration stores or key vaults. Bake in golden-path templates, pre-configured infrastructure definitions, and built-in governance checks. Implement the defined blueprints and golden paths using the newly added platform capabilities. This would include integrating the previously added infrastructure and services through Kubernetes manifests and Helm charts in a way that provides a curated experience.

Expose via MCP Server

Once the platform is available, it's about implementing the interface. This interface should be self-describable and machine-readable, characteristics that clearly suit MCP. Expose capabilities that highlight opinionated boundaries, from API versions to infrastructure limits, so the agent has a bounded context to operate in. Capabilities should be self-describable and machine-friendly as well. This will include well-thought-out tool descriptions that agents can use to make better decisions. A hypothetical sketch of what one catalog entry might look like appears after the metrics below.

Refine and Iterate

Test the prompt-to-prod flow with real development teams. Iteration is what makes all this work: because every platform's composition differs, there is no golden rule. It is about testing and improving the tool descriptions. Fine-tune MCP tools based on feedback. Based on feedback from testing, keep refining tool descriptions; at times this will require API changes as well. It may even require changing opinions that are too rigid.

Vibe Deploy Away!

With the foundation set, teams can move seamlessly from vibe coding to production deployment with a single prompt. Monitor outcomes to ensure that speed gains do not erode reliability or maintainability.

What to Measure: Proving It's More Than a Demo

The danger with hype-driven trends is that they work beautifully in demos but collapse under the weight of real-world constraints. Vibe deployment avoids that, but only if you measure the right things. The "why" here is simple: if we don't track outcomes, vibe-coded apps could quietly introduce maintenance headaches and drag out lead times just like any rushed project. Guardrails are only useful if we know they're holding. So what do we measure?

- Lead time for changes: are we actually delivering faster after the first release, not just for v1?
- Change failure rate: are we keeping production stability even as we speed up?
- MTTR (Mean Time to Recovery): when something breaks, do we recover quickly?
- Infra cost per service: are we keeping deployments cost-efficient and predictable?

These metrics tell you whether vibe deployment is delivering sustained value or just front-loading the development cycle with speed that you pay for later in technical debt.
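As referenced above, here is a hypothetical sketch of how a platform team might model a single capabilities-catalog entry that an MCP tool could return to an agent. It is not from the article and is not part of the MCP specification or any particular IDP; the type, field names, and values are illustrative assumptions only, written in Swift purely for consistency with the rest of this collection.

import Foundation

// Hypothetical data model for one entry in a platform's capabilities catalog.
// It only illustrates the kind of machine-readable, opinionated answer the
// platform could hand to an agent instead of an open-ended search space.
struct CapabilityEntry: Codable {
    let name: String               // e.g. "backend-service"
    let language: String           // the blessed language for this abstraction
    let runtimeVersions: [String]  // approved runtime versions only
    let datastores: [String]       // pre-approved infrastructure classes
    let goldenPathTemplate: String // identifier of the scaffold to use
}

let backendGoldenPath = CapabilityEntry(
    name: "backend-service",
    language: "Go",
    runtimeVersions: ["1.22", "1.23"],
    datastores: ["mysql"],
    goldenPathTemplate: "backend-service-v3"
)

// The platform's MCP server could serialize entries like this in response to
// a capabilities-catalog tool call, giving the agent a bounded menu to
// assemble from.
if let json = try? JSONEncoder().encode(backendGoldenPath),
   let text = String(data: json, encoding: .utf8) {
    print(text)
}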
For platform leaders, this is a call to action:

- Stop thinking of opinionation as a limitation; start treating it as the enabler for AI-powered delivery.
- Encode your best practices, compliance rules, and architectural patterns into the platform itself.
- Measure relentlessly to ensure that speed doesn't erode stability.

The future of software delivery isn't "prompt to prototype." It's prompt to production, without skipping the engineering discipline that keeps systems healthy. The tools exist. The patterns are here. The only question is whether you'll make the leap.
Redis Acquires Featureform to Help Developers Deliver Real-time Structured Data into AI Agents
- NewsWire
Redis today announced the acquisition of Featureform, a powerful framework for managing, defining, and orchestrating structured data signals. The acquisition helps Redis solve one of the most critical challenges developers face with production AI: getting structured data into models quickly, reliably, and with full observability.

AI agents need more than LLMs; they depend on real-time data, past interactions, and knowledge bases to deliver accurate results. Getting that context to models at the right time remains a persistent challenge in deploying AI apps and agents into production, and delivering an exceptional experience to users once in production. Redis makes it simple by serving as the real-time data platform developers trust to power AI agents with memory, knowledge, and structured data, now supercharged by the integration of Featureform. Featureform will become a part of Redis' feature store solution, complementing the fastest benchmarked vector database powered by Redis Query Engine, and the most advanced semantic caching service, Redis LangCache.

Featureform will allow developers to:

- Define features as reusable, versioned pipelines
- Unify training and inference workflows across batch and streaming
- Maintain point-in-time correctness for offline model training
- Serve low-latency features using Redis in production
- Detect data drift and monitor changes to feature distributions

"Adding Featureform immediately allows Redis to serve more AI development use cases with speed and simplicity," said Rowan Trollope, CEO of Redis. "By integrating Featureform's powerful framework into our platform, we're better enabling developers to deliver context to agents at exactly the right moment, so they reason, act, and interact accurately and intuitively."

"In joining forces with Redis, we're uniting the orchestration layer that serves contextual data to models and agents with the world's leading real-time data platform," said Simba Khadder, Featureform's founder and CEO. "Together, we're building the context engine for AI and agents, enabling developers to deliver the right data at the right time to power the next generation of intelligent systems."
Redgate Software's Data Modeler aims to help companies that manage multiple database platforms
- Latest News
- data
- Redgate
Redgate Software has announced the launch of its new data modeling tool that helps organizations deal with the complexity of working with multiple database platforms. Redgate Data Modeler is a rebranded version of the Vertabelo Data Modeler, following Redgate's acquisition of the company. According to Redgate, the tool had already been used by thousands of data professionals, and will now be able to take advantage of a stronger product roadmap backed by Redgate.

Redgate recently conducted a report where it found that 79% of organizations surveyed use more than one database platform and 29% use more than five. Redgate Data Modeler supports 10 different database types and allows companies to visualize, document, and evolve their database structures without having to write any SQL. It currently supports Microsoft SQL Server, Oracle, MySQL, PostgreSQL, SQLite, IBM DB2, Snowflake, Amazon Redshift, HyperSQL, and Google BigQuery.

"We've been aware of Vertabelo's presence in the database space for some time, and it quickly became clear that the company's offerings were a natural fit with Redgate," said Jakub Lamik, CEO of Redgate. "Vertabelo brings a SaaS-native, platform-agnostic modeling solution that complements our existing strengths and extends our reach beyond SQL Server. This acquisition strengthens our ability to support every stage of the database development lifecycle, from design through to deployment, and accelerates our portfolio offerings towards increased productivity and automated database development. With Redgate Data Modeler, customers gain a powerful, collaborative tool that reduces complexity, increases reliability, and helps teams deliver value faster."

According to the company, this announcement follows other integrations of acquired technologies into Redgate's product portfolio, including the open-source data migration tool Flyway and the database ranking company DB-Engines. "The launch of Redgate Data Modeler continues this trajectory, laying the foundation for further innovation in multi-database, cloud-based productivity solutions," Redgate wrote in an announcement.
Google unveils Gemini Enterprise to offer companies a more unified platform for AI innovation
- Latest News
- AI
- gemini
Google is announcing a new offering built around Gemini, designed specifically with large enterprise use in mind. Gemini Enterprise consolidates six core components:

- Advanced Gemini models
- A no-code workbench for analyzing information and orchestrating agents
- Pre-built Google agents for tasks like deep research or data insights
- The ability to connect to company data
- A central governance framework for visualizing and securing all agents
- Access to an ecosystem of over 100,000 industry partners

"By bringing all of these components together through a single interface, Gemini Enterprise transforms how teams work. It moves beyond simple tasks to automate entire workflows and drive smarter business outcomes, all on Google's secure, enterprise-grade architecture," Thomas Kurian, CEO of Google Cloud, wrote in a blog post.

Gemini Enterprise offers tools for building custom conversational agents. For example, using a low-code visual builder, companies will be able to build a customer engagement agent and then configure it for all channels, including phone, web, mobile, email, and chat. These agents are powered by the latest Gemini models and feature natural-sounding voices that can adjust to real-world background noise on a phone call.

Additionally, Google is announcing the Gemini Enterprise Agent Ready (GEAR) program to help companies upskill their developers to be able to build and deploy agents. According to Google, using Gemini Enterprise with Google Workspace allows organizations to utilize multi-modal agents that are built into their Workspace apps. For example, Google Vids can be used to transform a presentation into a video with an AI-generated script and voiceover. The company is also announcing a preview of its new Data Science Agent, which helps accelerate data exploration. It can instantly find patterns and generate multi-step plans for training and inferencing.

"As AI transforms organizations around the world, Google is the only partner with the full set of offerings that you can tailor to your organization's needs. And most importantly, we are delivering real business value to help you drive ROI from your AI investments. This is the power of Gemini Enterprise: the new front door for AI in the workplace. We're bringing the best of Google AI to every employee, for every workflow. And we're excited to support you every step of the way," Kurian concluded.
Atlassian shares major updates to its genAI assistant Rovo at Team '25 Europe
- Latest News
- AI
- Atlassian
- rovo
Atlassian is hosting its annual user conference Team '25 Europe this week in Barcelona, and during the event, the company shared several new and upcoming updates to its generative AI assistant Rovo. "With so many customers embracing our AI vision, we're not slowing down. We've been busy packing Rovo full of new AI-powered skills, smarts, and places to transform your workday," the company wrote in a blog post.

Atlassian announced the general availability of its AI coding agent Rovo Dev. Rovo Dev can help with code reviews, documentation, dependency cleanups, and more, and it leverages context from tickets, docs, incidents, and business goals to provide developers with information that will help them make more informed decisions. "Powered by Atlassian's Teamwork Graph, Rovo Dev understands your company, your projects, and your goals, and connects the dots across Jira, Confluence, Bitbucket, Compass, Jira Service Management, and more. It's not just about code: Rovo Dev brings together business context, documentation, tickets, and code history, so developers always have the full story at their fingertips," Atlassian wrote in a blog post.

Additionally, starting early next year, Rovo Search will become the default search in Jira, which will allow Jira's search to suggest relevant issues and projects. Rovo Chat will also be getting over 100 out-of-the-box modular capabilities from Atlassian and its partners that can be used in chat, agents, and workflows. Other new Chat capabilities include the ability to remember past conversations and preferences and a new collaborative workspace called Canvas. Updates to Rovo Studio, a platform that brings together all of Atlassian's building tools, include the ability to create agents that combine multiple Rovo Skills, fine-grained permissions for agents, and Automations to allow agents to be embedded into cross-team workflows. The company also announced that desktop and mobile versions of Rovo will be available soon, and that users who don't have a paid Atlassian subscription will be able to download the mobile app for $5/user.

In addition to these updates to Rovo, Atlassian also announced its Software Collection and Service Collection. The Software Collection is composed of Rovo Dev, Bitbucket, Pipelines, Compass, and DX, and provides insights across the SDLC so that engineering leaders can see where their teams are getting stuck and why. The Service Collection combines Jira Service Management, Customer Service Management, and Assets into a single platform and subscription.
Google launches ecosystem of extensions for Gemini CLI
- Latest News
- AI
- gemini
- gemini cli
Google is launching Gemini CLI extensions to allow different development tools to connect up to the Gemini CLI. Each extension includes a playbook that teaches the CLI how to effectively use that tool, eliminating the need for developers to configure them. "If you want to look under the hood, Gemini CLI extensions package instructions, MCP servers and custom commands into a familiar and user-friendly format," Google wrote in a blog post.

Twenty-two extensions are available at launch from Google partners Atlassian, Canva, Confluent, Dynatrace, Elastic, Figma, GitLab, Grafana Labs, Harness, HashiCorp, MongoDB, Neo4j, Pinecone, Postman, Qodo, Shopify, Snyk, Sonar, Stripe, ThoughtSpot, Weights & Biases by CoreWeave, and WIX. Google itself offers several extensions to connect the Gemini CLI to its other tools, such as Chrome DevTools, Firebase, Flutter, Genkit, and more. It also supports extensions that were created by the community. Google has created a new website that lists all of the available extensions to make it easier for developers to see what their options are, and the extensions are sorted by how many GitHub stars they have. Extensions can be installed directly from the command line in a single command.

"Gemini CLI extensions put you in control. You can combine extensions, chain commands and build a personalized toolchain that perfectly fits the way you work. Whether you want to streamline a personal workflow or integrate a company's internal tools, you now have the power to create the command-line experience you've always wanted," the company wrote.
LaunchDarkly Accelerates Journey to Self-Healing Software with Feature-Level Observability That Reduces Developer Friction and Accelerates Triage
- NewsWire
LaunchDarkly, the comprehensive feature management platform, today unveiled key new AI capabilities and integrations that let developers ship software at even higher velocity while keeping risks at bay. These updates further the company's vision for Self-Healing Software that allows engineers to focus on delivering amazing user experiences instead of anxiously awaiting late-night calls to fix bugs and outages.

Pressure has been mounting on developers to ship code faster and faster as organizations have moved from yearly to seasonal to agile deployments. And now, the rise of AI-assisted "vibe coding" is accelerating that pace even more, increasing the volume of code shipped and, with it, the risk of shipping buggy software. Traditional Application Performance Monitoring (APM) tools, while still essential for infra and system-wide monitoring, were never built to tie the performance of the features being shipped to the underlying observability data. That has left a critical gap in observability at the release level, one that can mean costly outages and customer churn when issues go undetected.

The new observability updates, which build on the company's previous developments from its Galaxy 2025 user conference, provide closed-loop automation for debugging and quality control that transforms how companies approach software quality and reliability. The updates include:

- Live view of feature performance: Adding session replay, error monitoring and APM data directly into the rollout surface gives developers a live view into how changes are performing without needing to wait on legacy alerts or hunt across APM dashboards. This closes the loop between code change and customer impact, dramatically reducing mean time to repair and building confidence to ship more often.
- Regression attribution to metrics: Guarded Releases connect the dots between what changed (your feature flag) and what broke (your metric). No more guesswork, blame-pong, or digging through dashboards at 2 a.m. When a metric regresses, LaunchDarkly shows you the exact flag change that caused it and lets you roll it back with one click.
- Session replay: Provides full context into how a release impacts real users, right down to clicks, rage-scrolls, and form abandons. You can finally pair telemetry data with human-readable truth. It's the fastest way to diagnose what actually happens when a release behaves badly, especially when the bug doesn't trigger an alert.
- Upcoming AI-powered diagnostics: The upcoming Vega AI agent will help eliminate the traditional "needle in a haystack" debugging process. Vega analyzes logs, traces, metrics, and session replays to identify root causes, generate timelines of what broke and why, and surface recommended code changes, turning noisy production data into actionable insights.

"While iterating quickly on products is paramount, organizations are becoming terrified of their own speed; they can find themselves flying blind on how their software is performing as they ship," said Jay Khatri, Head of Observability at LaunchDarkly. "One bad release during peak season can cost millions in revenue and customer trust, which is why we're so focused on moving from reactive damage control to proactive confidence."

"Organizations are now building software faster than their capacity to roll it out and manage it effectively, a trend exacerbated by the rise of AI-assisted coding," said James Governor, analyst and co-founder of RedMonk. "LaunchDarkly has responded by investing in observability tooling to enable session replay, troubleshooting and rollbacks at the feature level, for stability with speed."

To learn more about the new observability features, register for the October 22 webinar featuring Jay Khatri, Head of Observability at LaunchDarkly, James Governor, co-founder of RedMonk, and JR Robinson, Senior Director of Infrastructure at Writer.
Meta to donate React and React Native to the Linux Foundation
- Latest News
- Linux Foundation
- React
Today at React Conf, Meta announced that it would be donating its JavaScript UI libraries React and React Native to the Linux Foundation, which will be forming the React Foundation to support these libraries. The React Foundation will include founding members Amazon, Callstack, Expo, Meta, Microsoft, Software Mansion, and Vercel. Its executive director will be Seth Webster, who is currently the head of React at Meta.

According to the Linux Foundation, once the new foundation is formed, Meta will contribute the libraries and then the new organization will provide governance, manage core infrastructure, organize events (including React Conf), and launch new programs that encourage community collaboration.

"The move to a neutral home is the natural next step in the evolution of this important open source technology to ensure React and React Native remain open, innovative and community led," said Jim Zemlin, executive director of the Linux Foundation. "This will be a major milestone for the open source ecosystem, and we look forward to welcoming the React Foundation to the Linux Foundation."

Andrew "Boz" Bosworth, CTO of Meta, said: "We open sourced React 12 years ago and since then the community has made it an essential part of how our industry ships better products. Today React is used by over 20 million developers to build across mobile, desktop, TVs, gaming consoles and VR headsets. The new React Foundation board will include many leaders who helped get the community where it is today, including from Meta and our partners at Amazon, Callstack, Expo, Microsoft, Software Mansion, and Vercel. I'm excited to keep supporting the React community and making better user experiences for everyone."
XMLUI: A modern web development tool based on a component model
- Latest News
- React
- Visual Basic
- XMLUI
Back in the mid-1990s, if you wanted to build useful software, you didn't necessarily need to be a coding prodigy. You had tools like Visual Basic, an ecosystem of components, and the ability to simply wire them together. You were, in essence, standing on the shoulders of giants, the coders who built those foundational components. It was a time of rapid iteration, where ideas could quickly become tangible applications.

If you're under 45, you might not fully grasp the magic of that era, or perhaps you've noticed that the modern web, despite its advancements, hasn't quite replicated that seamless component-driven development experience. That's where XMLUI comes in. It aims to bring that Visual Basic model to the modern web, specifically leveraging the React-based component ecosystem. XMLUI wraps React and CSS, providing a suite of components that you can compose using simple XML markup. It's about democratizing UI development, making it less of a black art and more of a structured, accessible process.

Gent Hito, founder and CEO of /n software, the company behind the project, detailed the four-year journey to create it. He started this project to simplify the often overly complex world of browser-based UI development, noting that existing methods required "too many specialists."

"We are back-end guys. We never build things with interfaces. We built our website. It's a brochure, and that's about it, very low tech," Hito told CodeProject. "But then we realized we needed to build some management UI, some simple things in general. And I had no idea how difficult it was to do that on a browser surface. These days, you need a UI guy, you need a UX guy, you need an HTML guy, you need a CSS guy, you need a JS guy... this doesn't make sense."

When Hito showed the framework to consultant Jon Udell, he was skeptical. In an interview with CodeProject, Udell said, "It's early. The people that really get it are few and far between. Most people are just - if you saw that Hacker News thread - it was mostly people just reminiscing about what XML and VB and UI mean to them. And a lot of them just, frankly, were telling us how we're going to fail and how this has been tried before."

Hito laughed as he noted that "several hundred people took time out of their Sunday to post on Hacker News about how this is a bad idea, so I'll take that any day. I think that's better validation than anything."

Udell said he was attracted to the project because there was a real, demonstrable need for the solution, and that it is meeting that need. "And in Gent's case, the poster child is the app that they built on XMLUI to manage one of their servers. That kind of proves the point, that a team of back-end devs, with no React understanding, no CSS, not wanting to have to go down that rabbit hole, were able to do the thing that was needed for the business. That's the story that you always want to hear."

The project has struck a chord among developers, and from a blog post Udell wrote about the project, the first golden lead user was found. "All we're looking for right now is the next several of those, and ideally, several who are not like these guys, because these guys are React and CSS pros.
But even for them, they're already seeing this as a thing that inclines them to rip out Retool for XMLUI, which is beyond what I think we would have even expected or hoped."

Udell went on to discuss how XMLUI facilitates collaboration with AI by enabling the creation of human-readable and writable code, in contrast to complex React or CSS code generated by AI that can be difficult to review or modify. He described having coding assistant Claude write a demo invoicing app, which it returned quickly and was "shockingly complete and functional." But there was a lot of React code and CSS code that Udell was not competent to work with, he said.

"I can't modify it. I can't collaborate with the AI at that layer," Udell explained. "But now that stuff is expressed in XMLUI, it's expressed at a much higher and more declarative level, in a very small amount of markup. And I can forget the fact that it's XML; it could be JSON. That's sort of irrelevant. What matters is that it's concise and human-readable and writable, as well as being AI-readable and writable, so it sets up the possibility for really productive collaboration with these agent AIs."

Udell went on to say that the MCP server they have has access to source code and documentation, and that they are building out how-to examples that the AI has access to. "So when you are in an environment where you're trying to build something, and you're using one of these agents to help you, the goal is that it will always find a correct working pattern to base what it does off of. So it's less likely to hallucinate because you've got links."

The ability that XMLUI provides to understand the code, review it, and interact with the AI, Udell said, is "the kind of dynamic that we want to be having when we collaborate with these AIs, versus vibe coding. I tell it what to do. I don't know how it did it. The human-readable format really makes the difference there."
The personal website of the one and only Joe Fabisevich, an indie developer building Red Panda Club Inc. Formerly an iOS developer working on civic integrity, societal health, misinformation, and other wild things @Twitter.
Artifacts
I. I once knew a wonderful man. He was kind, gentle, and beloved by many. He was so wise that when he graduated from college, Mahatma Gandhi himself handed him his diploma. He was a scholar and a gentleman, blessed with the unique gift of deeply knowing Sanskrit and Hindi. He spent much of his life translating scrolls filled with the wisdom of the ancients into modern language, in pursuit of preserving history.
II. When Mongol troops raided the House of Wisdom and dumped so many manuscripts into the Tigris that "the river ran black with ink", Muslim chroniclers equated the loss of knowledge to a civilizational catastrophe. Many of the books in the Grand Library of Baghdad were torn apart by pillagers so their leather covers could be made into sandals, compounding the pain of an already destructive act.
III. In 1969, three astronauts planted an American flag on the moon. This event was captured on film and beamed to millions of household television sets, as people across the world shared the experience of witnessing a profound leap for mankind. Our connection to the cosmos had been reshaped, and we still talk about it decades later.
IV. Early internet phenomena like Charlie the Unicorn, Potter Puppet Pals, and Peanut Butter Jelly Time provided endless entertainment. My older brother introduced me to All Your Base Are Belong to Us, and I showed him lolcats. A few months later, I met a kid I considered sheltered because he'd never heard of Newgrounds, so I showed him everything it had to offer, and we relived the humor together. That rhythm of discovery and sharing lasted for years, as the internet grew at a pace that felt almost unthinkable compared to my early childhood.
V. Millennials remember The Dress as a seminal cultural moment that captivated people for days, but what most don't remember are the two llamas that escaped a zoo earlier that day and caused a commotion along the way. To my recollection, this was the first viral moment that was overtaken by another so quickly that it might as well not have even happened.

VI. Last week I watched a very funny TikTok. It was a meme that dominated everyone's feeds for about eight hours, but then we collectively forgot it ever existed and moved on to the next funny video.

As people willfully turn themselves into content creators, information disseminates faster than ever. The half-life of any artifact grows shorter, and the societal impact of any captured moment becomes increasingly fleeting. The purpose of writing, art, and creation has always been human connection, but as the goal shifts to pleasing an opaque algorithm, the creations themselves have become less meaningful. I expect this trend will only accelerate as more of our economy becomes tied to clicks and views, and as the act of creation grows more frictionless with the proliferation of AI.

Artifacts are our distillations of history; they encapsulate and preserve the human experience. Our world is built on the artifacts of those who came before us, people who cared enough to share, document, and build something that outlasted themselves. It is a spiritual experience to create something new in this world, to turn a figment of our imagination into a shareable element of the universe. I do my best to fill my days with artifact creation, shunning the incentive structures the modern world creates when I can, to tap into the essence of what drives me to create artifacts that will outlive me.
I don't spend all my time creating my own artifacts; there is often more to be gained by sharing myself, so others can create their own artifacts that bring meaning to them and others. In a way, that transfer of knowledge is itself a transient artifact, just like a years-long running joke between me and my wife, a kiss we share, or a wedding vow we made, is an artifact of our love. Almost everything I consciously do is related to creating or preserving artifacts, but the artifact is only a small part of who I am. It's no coincidence that I've aligned the time I spend on earth and the way I make a living with my values. Anything meaningful is worth doing, and worth preserving.

What people see is the result of hundreds or thousands of hours of work that went into creating the artifact, and the hundreds of thousands of hours I've lived that led to it. But an artifact can never truly capture my full experience. It is a synthesized approximation. At best, it's a low-fidelity version of my lived reality: not the life itself, but the echo. And yet, when thoughtfully preserved, the artifact will outlive me.
You Should Feed Ducks The Good Bread
Colleen and I were binge-watching Gossip Girl last winter, and there are a few scenes where a stressed and irate Blair Waldorf wanders over to Central Park to feed the ducks. I told Colleen, "that sounds soothing - we should do that when the weather gets better." And then, as I'm wont to do, I promptly forgot. But Colleen takes note of moments like that, and a few months later she reminded me: we should actually go feed some ducks.
That's how I found myself in Central Park last weekend, feeding ducks. It was the first beautiful day of spring in New York, the kind of day New Yorkers wait for, when everyone pours out to enjoy sunlight finally breaking through after a long dark winter. We had a picnic, strolled through the park, wandered amidst thousands of New Yorkers, and of course, we fed the ducks.
My wife (who I should preface is an extremely kind, caring, and loving person) and I got into a small debate over what kind of bread to feed the ducks. I insisted on buying them a nice loaf, and she felt it was unnecessary; the ducks would happily take our scraps. I argued that buying a good loaf cost us very little but could potentially make their day. Heck, it could even possibly be the best meal they ever have. She replied that there's no way to know if they'd even notice the difference; to them, it might just be one carb versus another.
Three Philosophers Interject

I bought the dang bread, and she didn't mind. I spent the whole afternoon thinking about that moment, and why it meant so much to me. In the end, I came back to three philosophers and their philosophies.
Colleen and I were binge-watching Gossip Girl last winter, and there are a few scenes where a stressed and irate Blair Waldorf wanders over to Central Park to feed the ducks. I told Colleen, "that sounds soothing, we should do that when the weather gets better". And then, as I'm wont to do, I promptly forgot. But Colleen takes note of moments like that, and a few months later she reminded me: we should actually go feed some ducks.

That's how I found myself in Central Park last weekend, feeding ducks. It was the first beautiful day of spring in New York, the kind of day New Yorkers wait for, when everyone pours out to enjoy sunlight finally breaking through after a long dark winter. We had a picnic, strolled through the park, wandered amidst thousands of New Yorkers, and of course, we fed the ducks.

My wife (who I should preface is an extremely kind, caring, and loving person) and I got into a small debate over what kind of bread to feed the ducks. I insisted on buying them a nice loaf, and she felt it was unnecessary; the ducks would happily take our scraps. I argued that buying a good loaf cost us very little but could potentially make their day. Heck, it could even possibly be the best meal they ever have. She replied that there's no way to know if they'd even notice the difference; to them, it might just be one carb versus another.

Three Philosophers Interject

I bought the dang bread, and she didn't mind. I spent the whole afternoon thinking about that moment, and why it meant so much to me. In the end, I came back to three philosophers and their philosophies.

John Rawls

John Rawls' Veil of Ignorance is a thought experiment about how to design a just society. Imagine yourself behind a veil of ignorance, where you don't know your place in that society. You don't know your race, gender, class, talents, or job, and because you could end up a CEO or a janitor, you'd rationally choose to build a society rooted in fairness and decency for all.

Now imagine you're born into a world where you could be a human or a duck. Humans shape the world to their whims, while ducks hunt for scraps and take handouts from strangers. What if you were the duck, not the one giving bread, but the one hoping someone kind might offer you a piece? That's the essence of contractualism. This is not a truly just world, but it is our world.

Blaise Pascal

Pascal's Wager is a similar framework, but more inward-looking. Blaise Pascal argued that the only rational thing to do is believe in God. Why? If you believe in God and you're right, you gain infinite reward: a spot in heaven for the rest of eternity. If you believe in God and you're wrong, you lose a little: some wasted time and lifestyle shifts. If you don't believe in God and you're wrong, you risk infinite loss: eternal fiery damnation. If you don't believe in God and you're right, you only gain a little: temporary freedom and earthly pleasures.

I think about Pascal's Wager a lot, not just when I'm wondering about my everlasting fate. Feeding ducks good bread isn't exactly a God-tier question, but I do often ask: what are the consequences of my choices if I'm right or wrong about my assumptions? It's often worth putting in the extra effort, because maybe I'm wrong, and there's more to this mortal plane than I know.

Immanuel Kant

And then we have Deontology, or more specifically, Kantian Ethics. Kant believed that morality isn't about outcomes or self-interest; it's about duty, reason, and universal principles.
His Categorical Imperative says: "Act only according to that maxim whereby you can at the same time will that it should become a universal law." That's a philosophical way of saying Kant believed in the Golden Rule. If I were a duck, I'd want the good bread just like any person does. And so, as a human, I have a responsibility to feed ducks the good bread.

Back to Central Park

Reader, I did not feed the ducks the good bread. Not because I'm morally bankrupt; we simply did not find enough ducks to feed the giant loaf of bread I'd stressed about all day. The few we did find got some decent sandwich bread leftover from our picnic. When I got home, I left out the remainder of our picnic for the neighborhood birds, some fruit that they seemed to appreciate. The next day, we used the good bread to make delicious sandwiches and played lovingly with our little cat.

Not feeding the ducks wasn't a moral failing, it was just a coincidence. Sometimes, you need to find more ducks in your life so you can feed them the good bread. Spend some time looking for a duck to feed; you might be surprised how much it fills you up, too.

Look, no one really knows how any of this works. Not Kant, not Pascal, not Rawls. But I feel pretty strongly that small acts go a long way. So when you get the chance, you should feed ducks the good bread. Metaphorically speaking, of course. It turns out that bread isn't great for ducks, and ethics are complicated, but kindness is always warranted.
A Trick For Opening Magic Links in Your RSS Reader
This blog post is documentation for a very specific problem I run into about once a year. That's rare enough to forget how I solved it, but frequent enough to waste 15 minutes rediscovering the answer. And let's be honest: it's not the big problems that drive you mad, it's the little ones that feel like their own Sisyphean hell.

The Problem

Some websites [1] require you to log in using a magic link, a one-time link emailed to you that signs you in when clicked. It's usually seamless. But on iOS, it can quietly become a headache. [2]

If you open the link in your default browser, everything's fine, no issue at all. But I do most of my reading in my RSS app, Reeder. When I run into a short-form RSS feed [3], I'll tap to open the article in SafariViewController, the in-app browser you see in apps like Reeder, or my own app, Plinky. It's an easy way to read the full article in a browser without leaving the app you're already in.

The Problem (Simplified)

I open a link to paywalled content, like the excellent tech newsletter I subscribe to: platformer.news. Platformer's hosting provider, Ghost, emails me a magic link to log in and access the content I pay for. But there's no way to open that sign-in link inside Reeder's SafariViewController. If I click the link, it opens in the iOS default browser, and there's no way to force it to open in Reeder's SafariViewController instead. [4] I can't even copy and paste the link, since SafariViewController has no address bar.

The Solution

Every time I run into this issue, I end up coming back to the only solution that actually works. My favorite RSS provider, Feedbin, has a clever feature: it lets you generate a unique email address (like xyz123@feedbi.in) that turns incoming emails into an RSS feed. [5] It's meant for forwarding email newsletters into your RSS reader, a nice way to keep all your reading in one place. But in a pinch, it's also a great trick for accessing any email inside your feed.

I can't tell Platformer to send the magic link directly to that Feedbin address, because my login is tied to my personal email, but I can forward the email there myself. Since I'm already subscribed to that email-based RSS feed, the message shows up in Reeder. Now I can tap the login link in Reeder, which opens SafariViewController, and just like that, I'm signed in. Finally, I can read the paywalled content I actually paid for!

Conclusion

While I hope this post helps someone else out there, let's be honest: it's really written for me, 12 months from now, when I've forgotten all of this again.

1. Such as all paid Ghost newsletters.
2. You can't even long-press and copy the link, because that will load the authentication page, invalidating the link's effectiveness.
3. Some websites don't offer the full content of an article in the RSS feed. This can happen for a few reasons, but the most prominent is because the content lives behind a paywall, and this prevents free access to paid content.
4. I did actually add a URL scheme for this in Plinky, because I know firsthand how this can be an issue!
5. In case you don't use Feedbin, there are many services which are just as good and handle this specific problem.
Lights, Camera, Action Button
The iPhone 15 Pro launched with a marquee feature, the Action Button. The Action Button set out to replace the mute switch, which had existed since the first iPhone was released back in 2007. The Action Button is a software-powered button, replacing what previously was a hardware switch that would toggle your phone's silent mode on or off.

The appeal of the Action Button was that now you could decide what the side button should do for you. If you wanted it to be a mute switch, no problem, the Action Button can still be one. But if you want to use it to toggle your flashlight, launch the camera, or turn on Do Not Disturb mode, these alternatives and more are now possible. The unspoken downside has always been that it's hard to decide what the Action Button should do, if it can only do one thing.

There are ways to set up the Action Button to show a menu of actions, but that makes the Action Button less convenient. You can use hacks like Federico Viticci's MultiButton, which made it possible to assign separate tap and double-tap actions to your Action Button. These workflows and many others were built off of the Action Button's ability to run a Shortcut, but none of them ever stuck for me. While you may get a more powerful Action Button, you also get a more complicated one, trading off flexibility for fiddliness.

Then after 18 months with the iPhone 15 Pro, I had a breakthrough. This idea came to me in a dream (literally, not metaphorically), and last month I discovered a way to use the Action Button that is useful across apps, without tying myself to one action for the Action Button.

Our First App-Specific Action

My most common use case for the Action Button has always been to save links from my clipboard to Plinky, the link-saving app I make. You may be thinking, "Plinky has a share extension which lets you save links from any app, so why do you need the Action Button to save links from the clipboard?" Because the app I send and receive links from most, Messages, surprisingly does not have a share button.

Before mapping the Action Button to a "Save Link to Plinky" Shortcut, whenever a friend sent a link to me over iMessage I would have to go through multiple steps to save it: copy the link into my clipboard, open Plinky, and save the link manually. This was tedious, especially if I wanted to save multiple links, so I decided to take advantage of Plinky's built-in Shortcuts support to create a one-tap action, letting me save links from anywhere. At the same time, this action is mostly useful to me in Messages; it's not as important when I'm in an app like Safari or Slack. This meant we were back to where we started: there's this really useful action mapped to my Action Button, but I don't need it all the time. That got me thinking, why can't I create one dedicated action per app, which I can use to build a more flexible Action Button?

An iOS 18.2 Interlude

iOS 18.2 brought a very important change to Shortcuts, one that made the technique I'm about to share possible. There is a new Get Current App Shortcut block, which unsurprisingly, will tell you what app you currently have open. Once I was able to figure out the current app, it became possible to conditionally run actions based on the current app.

Action Mode

Now that we have all of the necessary pieces, we can build out our custom App-Specific Action Button workflow. (I've called mine Action Mode.) Let's walk through this image step by step, to see what's happening here. It's a bit long, but I promise it's not scary.

- We set our Shortcut to receive input from the Share Sheet. This allows us to run this Shortcut from an assortment of locations. If there is no input, I've chosen to fall back to the Get Clipboard function.
- We create a bunch of if blocks, one per app. This will allow us to run a different action based on the result of Current App. You'll notice there are no else blocks, we'll get to that in a little bit.
- When Current App is equal to an app we've chosen to have a Custom Action, we now run said Custom Action. That Custom Action can be anything, it doesn't even have to be related to the current app we're in. (As you see in the example when the Current App is Messages.) You can create as many Custom Actions for as many apps as you'd like, the only limit is your imagination.
- At the end of the list we have a Fallback Action block. This is an optional action to run if you press the Action Button outside of an app with a Custom Action. I've mapped it to adding a Reminder to my Groceries list, an action I take often.

Note: The way we know to enter the fallback action block is a bit unconventional, and is predicated on the hasRunAction variable. Instead of having infinitely nested else blocks for every Current App check, we populate the hasRunAction variable whenever we run a Custom Action. This is slightly more error prone because we can forget to populate the hasRunAction variable, but it leaves our Shortcut significantly more manageable and flexible.
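If it helps to see that control flow written out, here's a rough Swift sketch of the same logic the Shortcut implements. This isn't Shortcuts code, and the helper functions and app names are hypothetical stand-ins for the blocks described above; it only illustrates the input fallback, the flat per-app checks, and the hasRunAction flag.

```swift
import Foundation

// Hypothetical stand-ins for the Shortcut blocks described above; in the real
// Shortcut these are "Get Clipboard", "Get Current App", and the per-app actions.
func readClipboard() -> String { "" }
func currentApp() -> String { "Messages" }
func saveLinkToPlinky(_ link: String) { print("Saving \(link) to Plinky") }
func openReaderMode() { print("Opening Reader Mode") }
func openURL(_ url: String) { print("Opening \(url)") }
func addReminder(to list: String, item: String) { print("Adding \(item) to \(list)") }

func runActionMode(shareSheetInput: String?) {
    // Prefer Share Sheet input, otherwise fall back to the clipboard.
    let input = shareSheetInput ?? readClipboard()

    // One `if` per app, no `else` blocks; a flag records whether anything ran.
    var hasRunAction = false

    if currentApp() == "Messages" {
        saveLinkToPlinky(input)   // the Custom Action needn't relate to the current app
        hasRunAction = true
    }
    if currentApp() == "Safari" {
        openReaderMode()
        hasRunAction = true
    }
    if currentApp() == "App Store" {
        openURL("itms-apps://apps.apple.com/updates")
        hasRunAction = true
    }
    // ...one block per app you care about...

    // The Fallback Action runs only when no Custom Action matched.
    if !hasRunAction {
        addReminder(to: "Groceries", item: input)
    }
}
```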
Long Live Apps

The great thing about this system is that it's easy to get started, and easy to add new apps to your workflow the more ideas you come up with over time. I started off with a handful of apps, but now it's grown to over a dozen. A few examples for how I'm using my App-Specific Action Button:

- Plinky: Open my Reading List
- App Store: Open the App Updates tab, via URL scheme: itms-apps://apps.apple.com/updates
- Castro: Toggle the sleep timer (so I can listen to a podcast while falling asleep)
- ChatGPT: Start a new ChatGPT Voice conversation
- Craft: Open my Plinky Roadmap doc
- Fantastical: Switch to my Personal Calendar Set
- GitHub: Open the repo of Boutique (my most popular open source project)
- Ivory & Mona: Generate a screenshot of the Mastodon post in my clipboard
- Messages: Save a link from my clipboard to Plinky
- MyWater: Log 12 ounces of water
- Photos: Open the Favorites album
- Safari: Open Reader Mode
- Slack: Switch to my Red Panda Club Slack, via URL scheme
- Things: Open my Today list

These are just my workflows, but the possibilities for configuring your Action Button are personal and limitless. If you need a template to start with, here is the Action Button Sample Shortcut I created, identical to the screenshot above.

Now all that's left to do is to assign our Action Mode Shortcut to the Action Button, which we can do in the iOS system Settings. And voilà, that's the App-Specific Action Button system (some would call it a Shortcut) to a better workflow.
Introducing Plinky: My Love Letter To Links
The post below was written by me, originally featured on the Plinky blog. To celebrate the launch of Plinky you can get 50% off of a yearly subscription by redeeming this offer: plinky.app/offer/REDPANDA

There are few words I've ever said more excitedly than these: I want to tell you about my latest app, Plinky.

Plinky makes it incredibly easy to do something we do every day, save links for later. You may already have a way to save links, I know I've tried every method under the sun, to the point where I decided to build my own app. That app is Plinky, and today it's available to download on the App Store. Over the last 18 months people have been loving Plinky, because it fixes the same problems I ran into when I've tried to save links in the past.

I tried filling up Apple Notes with links to come back to later, but that proved to be a disorganized mess. I tried leaving browser tabs open with links I wanted to look at later, but the links I needed later got lost amongst the dozens of tabs I had open for right now. I even tried spending hundreds of hours using apps dedicated to reading articles later, but not all links are for reading. All of that led me to build Plinky, a home for your links.

Why I Built An App To Save Your Links

I love links. I love reading links, I love it when people send me links to funny videos, I love collecting links, and I love sharing links. Something else I love is my fiancée Colleen. Shortly after we met, Colleen discovered how much I love links, and I'm very grateful that she found that to be an endearing trait. While she loved that I was thinking of her and sharing interesting articles, recipes to cook together, or cute animals I knew she'd like, she quickly found it distracting to have her phone buzzing with all the links I'd share while she was working. She suggested saving the links for later, so we could look at them together at night. That's when I started working on a small project, a home for these links, an app that would become Plinky.

I started to show people the app I'd made for the two of us, and they loved it. They were all saving links they needed but were unhappy with one thing or another. They kept telling me that they wanted an app like this for themselves. A few months later when I left my job at Twitter to start my own company building personal, playful, productivity apps, I decided that the app I needed to build for all of these people was Plinky.

Plinky became more than an app to Colleen and me, it became an essential tool in our lives and a ritual. Sometimes Colleen would save a cute picture of a red panda for me, sometimes I would save an activity we could do next weekend, but every night the two of us would come together and share the links that we'd saved for each other. To this day we still have plink time, where we lie in bed and show each other the links we've saved for each other.

Links come in all forms. An article just isn't the same as a Tweet or Instagram post you want to show a friend. A YouTube video or TikTok won't even open in an app dedicated to reading. Many apps like LinkedIn or Facebook have their own bookmarking systems, but I've wasted hours bouncing from app to app trying to remember where that link I saw a month ago but need now is. As I've built Plinky I've heard people tell me about the myriad of techniques they have for saving and storing links. There are countless organizational methods people use to reference links they might need later, but none of those perfectly fit into one app.
The experience of listening to Plinky's beta testers led me to three realizations:

- It needs to be easy to save a link, the simpler the better. Saving a link shouldn't take work, it needs to be one tap, so you can easily save a link. People's workflows benefit from having a universal inbox for their links.
- It should be easy to organize and find links you've saved for later, otherwise why are you even saving them?
- People aren't always looking for an app dedicated to reading articles. The diversity of links people save means many people are looking for a tool that helps them do anything with the links they've saved, a Swiss army knife for their links.

This is where Plinky comes in.

Plinky: Saving Links Made Effortless

Plinky makes it easy to save a link for later. You can save links from anywhere with just one tap. Whether you're on the iPhone, iPad, or Mac, in your favorite browser like Chrome, Firefox, or Safari, or want to build personalized workflows using integrations like Zapier, Unread, Shortcuts, or Plinky's API, saving a link for later couldn't be easier.

Plinky is highly customizable. You can make the app look, feel, and behave however you like. The links you save are very personal, as is what you want to do with them after they're saved. People have all sorts of workflows, so it's a core goal of mine to make Plinky easy enough for my mom to use (she really is a big user), yet something you can infinitely tweak to make Plinky a perfect place for your links.

Plinky has organizational features like Folders, Tags, Search, and Pinned Links. Each of these provides an opportunity to mold Plinky to your needs, and makes it very easy to find your saved links later.

What's Next For Plinky

I'm continuing to build upon these three pillars: ease, customization, and organization. Over the coming months Plinky will have the ability to:

- Add timed reminders for your links, for the purpose of reviewing a link in a certain time or context.
- Import your links from services like GoodLinks, Raindrop, Pocket, and other places you may already have your links saved.
- Create Secure Folders, giving you a space for more sensitive links protected by TouchID and FaceID.
- Indulge in an elegant reading experience that lets you customize fonts, sizes, and background color for the ideal read-it-later experience.
- Enjoy a native Mac app, one that takes the beautiful design Plinky already provides on iPhone and iPad and makes it feel right at home on the Mac.
- Have more ways to build personalized workflows around your links. It's easy to tag a link, but it should always be getting easier. Pinning links helps you prioritize some links over others, but the best experience will be even quicker. Plinky's search is powerful but it can get even more helpful by searching every word of a webpage you save.

A Favor, If I May

If you've made it this far I want to say thank you for caring so much about the app I've made. Plinky is the result of more than 1,500 hours of work. If that doesn't emphasize how much of my heart, sweat, and tears are in Plinky, I'm not sure what will. I'm truly excited about the idea of spending another 1,500 hours making Plinky better for you, then 1,500 more hours after that. If I may ask:
- Please download Plinky and try the app. I would love to know what you love about Plinky, what can be improved, or what I can do to make Plinky an integral part of your life.
- If you like the app, I would greatly appreciate a rating or review. Ratings decide the success or failure of apps on the App Store, and as much as I hate to ask, I would like to succeed because that means I get to continue building an app that strives to make your links better.
- Tell your friends. I hope it's not too gauche to ask, but I genuinely believe that Plinky can help improve many people's lives. Every person who tries Plinky may be a person who's now using an app that makes their life a little better, and nothing in this world makes me happier than getting a chance to help others.

Thank You

While you can say that I built this app, nothing in this world happens alone. I've had a lot of help along the way, and want to make sure anyone who's helped even in the smallest way is honored in Plinky. Plinky has a Thank You section on the About Red Panda Club screen where I've assembled everyone who's helped me while I've been building Plinky. The rest of this post was written for me and my loved ones, but you're free to read it as well.

I would like to first mention my wonderful fiancée and soon-to-be wife Colleen. Thank you for supporting me with your love, your advice, your daily feedback, and the hours of hands-on help you provided over the last few months in creating and solidifying Plinky's launch plan. I love you, and because of you my dream is now a reality.

Thank you to my family for cheering me on this whole time, without you I wouldn't be the person I am today. Plinky is a personal expression of who I am and the way I want to help make the world a slightly easier place to navigate. If it wasn't for you I wouldn't have that perspective, so thank you for instilling those values in me.

Thank you to Joey Banks for being a great friend, for our weekly calls, and for your help in shaping Plinky's App Store screenshots and website. Because of you I was able to show the world what I've built in the way that reflects what I wanted people to see. While I'm here bragging about my friend, if you're looking for the world's foremost expert in Figma, you should reach out to Joey.

Thank you to all of the beta testers who have provided endless ideas, feedback, and guidance over the last 18 months. Plinky wouldn't look or work the way it does today without your amazing ideas, and of course your extremely helpful bug reports. If you've provided a suggestion during Plinky's beta then odds are it's already been implemented, or is on the Plinky roadmap I've constructed for the next year and beyond.

Thank you to anyone who's helped me over the last few years, whether or not that involves Plinky. The last few years haven't been easy, and while I can't always be strong, and I can't always be brave, what I can always be is grateful and appreciative.

Now if I may leave you with a few words: Be good to others, help those in need, and save a few links.
The Reasoning Computer
The Turing test is dead, and we killed it. The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. From the 1940s [1] to the 2010s people programmed computers, and computers could only do what they were programmed to do in a rules-based deterministic manner. Sometimes a person would program the computer and it would do something unexpected, but 100 out of 100 times the computer was doing what it was programmed to do whether the person liked it or not. While there has been experimentation with what today we call AI since the 1950s, those machines were a long way away from passing the Turing test.

Why does using ChatGPT feel more like a conversation with the smartest person you know than a computer? It's because ChatGPT doesn't solve problems deterministically the way a programmed computer does, it solves them probabilistically. [2] ChatGPT demonstrates the ability to think about something in a logical, sensible way, the definition of reasoning. [3]

We've created something completely new here, a reasoning computer. [4]

Working With A Reasoning Computer

There are so many political, societal, economic, and ethical implications of Large Language Models (LLMs), 5,000 words wouldn't be enough to cover all those thoughts. (Trust me, there's a much longer post sitting in my drafts.) But what's really captivated me is why a reasoning computer really is different than anything we've used before, a conclusion I could only arrive at through experience.

ChatGPT has been an essential tool for me over the last month, especially over the last week as I've been building Plinky's browser extension. I'm a very experienced iOS developer but have little experience with web development. I know enough TypeScript and React to cobble together something with lots of help and guidance, but it will take me much longer than someone who knows what they're doing. A browser extension is important for Plinky to be successful though, which presents a unique challenge: I know what I want, I know how to describe it, I don't quite know how to get it, but I will know when ChatGPT gives me the wrong answer, so with some nudging I can get what I'm looking for.

Here's why the process of pairing with ChatGPT works, and how it helped me build a fully functional browser extension that lives up to my standards in less than a week. (With far less frustration than if you took away the tool and gave me a whole month.)

A simple browser extension to save links to Plinky's database is a much smaller problem than building a whole app. The problem is self-contained, which makes it quick and easy to test ChatGPT's results and see if the output matches my expectations. In fields like mathematics or computer science it's generally easier to verify a solution's correctness than come up with a solution in the first place.

I may be a novice web developer but I'm a great programmer. Even in a domain where I'm not comfortable I can describe the problem I'm trying to solve, assess whether a solution is good, do some research (on my own or with the aid of Perplexity and ChatGPT), and nudge the reasoning computer in the right direction. This isn't a process where I ask for something and am given exactly what I want, but I can promise you it's much easier than becoming a good enough TypeScript developer to build the high quality browser extension I want.
Little by little the browser extension looks and works more and more how I want it to, until it does exactly what I want it to do. The whole process is interactive so I'm learning how to get to the right solution. Not only do I have what I want, but this iteration made me a better web developer; I started off only knowing what the wrong output looks like but now I also know how the correct solution should look.

This is just one example of how I was able to accomplish something I previously wouldn't have been able to do thanks to an LLM, and the number of tasks I turn to LLMs for is growing every day. The same way that GPS becoming ever-present means I haven't opened a map in almost two decades, I find myself turning to ChatGPT or Perplexity rather than opening Google and clicking a bunch of links to find answers. I used to do my own research, I used to be the reasoning machine, but now I'm offloading more and more of that work to Large Language Models.

How Can A Reasoning Computer Even Work?

People will say that ChatGPT can't do math, and that's true in the most literal sense. A Large Language Model may not know what addition and subtraction mean to a human, but it can synthesize the correct results to add and subtract numbers better than a person. Similarly people point out that ChatGPT can't read, that because it's just a stochastic parrot it can't provide intelligible output. It's true that LLMs are complex statistical models, yet despite ChatGPT not knowing English from Urdu the way people do it's still capable of translating from English to Urdu to Russian to French in a way that I never would be able to. The fact that GitHub Copilot [5] doesn't actually know the difference between JavaScript and Swift hasn't stopped it from making programmers 55% faster at coding.

Large Language Models use a different form of problem solving that starts with inputs and extrapolates technique. That's the reverse of how humans believe they develop their skills: if you study hard, read a lot, and put in enough hours as a writer you too can become the next Faulkner or Shakespeare. But think about the way you first learned your native language, you listened and watched the world around you for 1-2 years, then reverse-engineered how the technique works. We're reasoning machines too, the difference is that the entirety of the internet wasn't preloaded into our brains the way it was into an LLM. (For the best, I don't know if you know but there's some bad shit on the internet.)

When we say ChatGPT can't do this or ChatGPT can't do that what we're doing is anthropomorphizing flaws onto the system, derived from our own experiences of solving problems successfully. The problem solving process may be difficult for people to understand because this is the first computer that doesn't do exactly what you tell it to do. Our intuitions may view this as a flaw, but OpenAI loading the whole internet into ChatGPT and creating a simple model for how to think, rather than directly programming the machine, is the reason this computer is incredibly useful in new and previously unexplored ways.

Simon Willison says that these tools make you more ambitious with what you can accomplish, and I'd like to build upon his axiom. When you have a reasoning computer you only have to know what the wrong result looks like, not how to get the right result, and that alone has the power to change how society solves problems.
1. Ada Lovelace deserves credit for writing the world's first computer program 100 years before ENIAC, but in this context I'm using the timeframe of the 1940s to focus the post on generally programmable computers.
2. It's perfectly fair to debate whether this is how the inner machinations of ChatGPT work, but I feel very strongly that at a minimum you can say this about the output ChatGPT provides.
3. This isn't because ChatGPT is sentient, but in all likelihood because it was trained on a corpus of human-generated data. It's difficult to define "thinking" in this context; my personal view is that there is no thinking without sentience, but in this context what I call thinking isn't the low-level internal machinations of ChatGPT, but one level higher: the step-by-step token output process that people using ChatGPT see in the process of getting their result.
4. I'd like to co-credit Joe Ugowe with coining this term, it stemmed from a wide-reaching discussion we had last night about our experiences with ChatGPT and Large Language Models.
5. GitHub Copilot is a Large Language Model product like ChatGPT, but trained with a coding-specific focus, which allows it to be integrated into a whole suite of Microsoft's programming-related tools and platforms.
The Present Should Be Signed
When I wrote The Future Will Be Signed almost six years ago the latest in AI advancements was Google Duplex. If you're like me and have never used Google Duplex, it's a feature of Google Assistant that could make calls on behalf of a person and automatically perform a task, such as booking restaurant tables. While you may have never heard of Google Duplex there's a good chance you've used a generative AI tool like ChatGPT, Midjourney, or GitHub Copilot.

Authenticity

We're going to need a way to prove the authenticity of a piece of digital content, everywhere, in a simple manner. This is where public key cryptography comes in. Our current solutions are noble efforts, but remain too complex.

It's quite an understatement to say that AI has come a long way since 2018, and yet the blog post's core thesis is even stronger today than when it was written. At the time I was concerned about a future where deepfakes, audio manipulation, and text generation spread across the internet. We're now living in the beginning of that future, this is our present. It has never been faster or easier to generate inorganic content, and the tools to do so are more usable and accessible than ever.

AI already has us questioning what we see on the internet, and the problem isn't going away. Fake news articles are being written by ChatGPT, fake books are being written with ChatGPT, and of course fake reviews made up by ChatGPT are being used to sell all of this.

Trust

This infrastructure is going to have to be baked directly into the software that developers build, in a way that is transparent to the end user. A politician (or anyone) needs to be able to sign a tweet, audio recording, or video clip to prove the authenticity of what they are saying. With the creation and fabrication of content being so easy, we're going to need a model where the person creating the content can prove it is trustworthy, and otherwise it should be treated as inauthentic.

When I worked on Twitter's Societal Health team I spent a lot of time thinking about misinformation, disinformation, abuse, harassment, and civic integrity. These issues often took the form of coordinated inauthentic behavior by large groups of people trying to manipulate people and the public conversation. The scale of the problem seemed enormous, now it's larger than ever, and only getting bigger. We still need tools to help us differentiate authentic and inauthentic behavior or content, but there haven't been many meaningful efforts to build authenticity into the products people use.

Arguably the largest advancements have come from a technology I personally have few positive feelings about, cryptocurrencies. When you believe everyone is an adversary then you need to build systems for trust. Bitcoin, Ethereum, and other crypto projects have shown that you can build a system based on public key cryptography that ensures a sense of truth. You may not like what that truth is, and it's easy to feel that way because of all the "Web3" projects that have been hilariously misused and abused in a seemingly unending number of ways. I'm not pinning my hopes on the blockchain solving our trust problem, but I appreciate that much better user experience paradigms for trustless systems have emerged over the last five years because they were necessary for crypto to succeed.
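To make the signing idea concrete, here's a minimal sketch of what signing a piece of content looks like with public key cryptography, using Apple's CryptoKit. The post text and keys are purely illustrative; a real system would still need to solve the hard parts, like distributing keys, tying them to verified identities, and surfacing the verification result in the products people actually use.

```swift
import Foundation
import CryptoKit

// Illustrative only: an author holds a private key and publishes the public key.
let authorKey = Curve25519.Signing.PrivateKey()
let publicKey = authorKey.publicKey   // shared somewhere people already trust

// The content being vouched for (a tweet, a video's hash, an audio file, etc.).
let post = Data("Polls close at 9pm, not 7pm.".utf8)

// The author signs the exact bytes they published...
let signature = try! authorKey.signature(for: post)   // force-try only for brevity here

// ...and anyone can check that the content is unmodified and came from that key.
print(publicKey.isValidSignature(signature, for: post))       // true

// Any tampering, however small, fails verification.
let doctored = Data("Polls close at 7pm, not 9pm.".utf8)
print(publicKey.isValidSignature(signature, for: doctored))   // false
```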
Scale

In some ways the problems are actually worse than ever. Anyone can buy verification on X, formerly Twitter, and impersonate their favorite brand. People have grown hostile and are treating platforms as adversaries because platforms no longer care about the people using their product. Platforms are even stealing usernames from active users; how can anyone trust what they read online when they don't know who's writing it?

Platforms are treating their users as adversaries as well. If you get locked out of your Google account you might as well consider your digital life gone. A company like Google doesn't and can't scale support to the level of personal help we've historically been accustomed to in common society. Protecting user safety means support agents must assume that someone writing them for help is a scammer, fraudster, or hacker trying to break into someone else's account. The incentive structures for helping people are all backwards because the risk of Google turning over someone's Gmail account to the wrong person far outweighs the positives of helping thousands of people. This may only affect 1 in 100,000 people, but when you're that 1 person, losing your entire digital identity is a horribly destructive experience.

People need a sense of trust, some shared truth, and we're still in search of that online. As more of our lives happen on an inherently untrustworthy internet the status quo becomes more and more untenable, something has to give. Things will either get better or they will get worse, and based on our approach of trying nothing and being all out of ideas, they are likely to get worse. The guardrails are coming off the system, and if we wait too long then trust in our systems online and offline may fully erode. It's discouraging that we can't figure out a way to solve the problems we have today, but an even bigger repudiation of the status quo is that we don't even talk about this large systemic risk, and probably won't until it's too late.
Displaced And Disillusioned On The Internet
Sometimes I think about all of the societal issues I cared about 10 years ago: privacy, government overreach, and platform accountability. More and more as I look at the technological landscape it feels like none of that care seems to have mattered.

When I worked on the Societal Health team at Twitter I had a voice and direct impact on these matters. If the feedback I received and still hear from my peers and higher-ups holds true, that voice was not only well-received but considered thoughtful and led to meaningful change. With hindsight I can't say that every choice we made was right, there are few easy answers and no correct answers, but we made the best decisions we could with the information we knew at the time. These days I build indie apps like Short Circuit and Plinky for many reasons, it's what I think will make me truly happiest right now, but one of my reasons is to carry less emotional weight on my shoulders. I still have my morals, but without having the position I once had it's harder than ever to translate my values into change.

The self-induced immolation of Twitter has caused a schism with the community I formed, cultivated, and connected with on the platform. People I follow, people I learn from, and many people I'd call my friends have all scattered to the four winds across a myriad of text-based social networks. I've spent the last week thinking about whether to join the latest Twitter-esque social network Threads, where many people close to me have made their way.

Twitter was an online home for me, and by working there I laid many of the bricks that stood up that home before new ownership decided they wanted to go Nero on it. Now as I contemplate a home on Threads I know I won't have the opportunity to speak up and create impact that aligns with my morals. Meta's long and sordid history of causing societal problems is well known. It's a helpless feeling to trust such important relationships to the whims of a historically opaque and soulless platform, but sadly it's now an option I'm considering.

Meta has often been a destructive force to the values I care about. When I look at the state of technology it's depressing to see how blatantly corrupt the technologies we use have become. Maybe they always were, perhaps it was youth and innocence that led me to believe that such large, powerful, and impactful institutions could be guided and reformed, but from what I saw firsthand there was a glimmer of hope that it could be fixed by people from the bottom up. I'm torn between my values and wanting to stay connected with the people I care about online. Twitter changed my life by connecting me to people that still show me kindness, give me joy, and make me a happier person.

But Joe, Why Don't You Just Use Mastodon Or Calckey Or Some Other Weird Thing That Federates Across The Fediverse?

Facebook aside, I've had moral qualms with using Google products, so I've long used alternative services. Instead of heading to google.com I open kagi.com, and instead of Gmail I use Fastmail. These are actually quite good substitutes, in many ways I prefer these two options over Google's billion-user products. Similarly I haven't missed Facebook at all since I stopped using it 10 years ago. I have no problem switching away from free problematic products to pay for a less troublesome alternative. Since a Musk-sized dagger ripped apart my community on Twitter I've wandered around looking for my people.
I've joined Mastodon, Bluesky, and countless other alternatives, all with their own benefits and sets of tradeoffs. I strongly believe in an open and federated social layer for the internet, letting people use any social network they want and stay in touch with their friends and family the same way I was able to switch email providers. I don't ever want to be caught in this situation again, where I've lost my community, and the promise of federated platforms is that you can move around from platform to platform with your followers and the people you follow.

Mastodon has been a haven for more technical nerds leaving Twitter. (Nerds who I love, especially since much of the iOS community has moved over there.) I have a core group of people that I enjoy spending time with there, but the second anything I say escapes that circle of people the environment gets much worse. I receive so many low-value context-free replies, often completely misunderstanding what I was actually trying to say. People who don't know you can be pushy and aggressive with their views, some basically ignore what you post, instead opting to spend 500 characters telling you what they think regardless of whether you asked. I constantly feel like I'm being talked at, not talked to, and it doesn't feel like a place that can be my home.

Beyond the community it's discouraging to see how slow Mastodon's pace of development has been. I know firsthand how hard it is to build a global-scale platform, and it's especially hard to build a large platform with only a few people and community funding. I have nothing but respect for Mastodon's approach, especially as they build the service in a standards-abiding way that can work with other ActivityPub-based services. And yet the platform still lacks features that Twitter has had for over a decade (ahem, global search), and has not become the Twitter substitute I'd hoped it would become.

[Very] Technical Solutions For Social Problems

I don't want to spend all my time ragging on Mastodon, I applaud their efforts and appreciate how thoughtful their team is. On the other hand Bluesky's decision making is not what I would call thoughtful, but what they have managed to do is capture lightning in a bottle. Many of Twitter's most entertaining posters migrated to Bluesky, leading to hilarious antics such as people threatening to beat Matt Yglesias with a bunch of hammers. (I wish I was kidding, but I do have to admit this was the first time a Twitter alternative actually felt like Twitter.)

Unsurprisingly a community that leads with hammers hasn't been very good at making Black users feel safe. I don't think it requires having worked at Twitter trying to minimize harassment to feel empathy, and you don't need a big empathic nerve to feel for people being told that the death threats they're receiving are "hypothetical death threats" and won't be removed from the platform. To Bluesky's credit they've intentionally kept the network small, passing up opportunities to scale and seize the moment, knowing they can't make everyone feel safe right now.

I feel conflicted about Bluesky, but where my hope for Bluesky lies is in what the platform is built upon, the AT protocol. AT solves important problems related to account portability and data ownership by relying on the concept of a Personal Data Server (commonly called a PDS).
We live in a world where people say that platforms should both moderate more and less content, that you should have free speech guarantees unless it's harmful; there is never going to be a right answer about what content should be allowed on a platform. This is why Facebook gets backlash, it's arguably why Elon Musk bought Twitter, and it's why every big tech company gets called in front of Congress. Currently your Facebook data is locked on Meta's servers, which means if you don't like Meta's speech or harassment or governance policies, you can't leave without losing your community. If a person owned their data then they would be free to move from platform to platform, looking for a home that suits them without losing connection to the greater community.

The Bluesky team aren't looking to build a platform, they're trying to build the protocols that underlie any social platform. By building a common protocol across the web people will be able to build and find platforms that suit them, and then can choose to (or choose not to) interact with people on other platforms through this shared language. The Bluesky team are protocol developers through and through, and I think their approach of building these fundamental primitives is the wise choice.

The countless moderation failures of Twitter, Facebook, and others show that ultimately someone has to make an often impossible decision about what content should be visible to users. These companies cannot make all the right decisions at a global scale, there is often no right decision, and despite my past work trying to help make Twitter safer, I don't believe it's possible to create a set of rules that can make everyone happy.

A better approach is to bring these decisions closer to the user. Mastodon does this at a community level: rather than some random contractor being paid an unfair wage halfway around the world making decisions about what content should be moderated on Twitter, on Mastodon the person who runs your community's server (often with the help of admins) will decide what's right for their community. You can join a community that seems great at protecting trans people, but over time you learn that they don't do a great job of removing pro-Nazi content. Now we're right back where we started, and you have to decide whether to find yourself a new home.

This is why building primitives into the protocol and doing it right is important. Bluesky recently shipped a feature that lets users build custom timelines, letting you add or remove any content you want from your feed, for example, nazis. [1] If done correctly platforms, developers, and individuals will be able to build and use tools for creating your own personalized experience rather than one homogenous "right" experience.

Will this work? That's a big question. What I don't trust the Bluesky team to do, though, is build a healthy social network where people feel safe to share their thoughts. And why should I? They assembled a small team to build a protocol, that was and is their stated mission, but have ended up falling ass-backwards into owning a social platform that people want to join.

An 800 Pound Gorilla Walks Into The Room

There are many things Mark Zuckerberg doesn't understand, for example why anyone would want to have legs in the metaverse, but if there's one thing Mark Zuckerberg understands it's social.
You can read that as a compliment about his operational skills or you can imagine him as the digital version of Philip Morris, but he has an almost gut-level understanding of what people want and how to give it to them. Threads has managed to sign up 100 million users in 5 days, in large part by bootstrapping their growth off of the Instagram network. There's also been a large collective of unhappy Twitter fiends (such as my friends) who are looking for any alternative to a Twitter run by Elon Musk. They're willing to forgive Mark Zuckerberg for his sins if he can lead them to salvation, which says a lot about how Twitter has fallen in their eyes. According to celebrities like Dane Cook the first week on Threads has felt like some drugged up parallel universe of Twitter.

I don't expect this high to continue. Meta still has to retain and grow the Threads user base for it to be meaningful, but it does seem that unlike other niche Twitter alternatives Threads will at least give Twitter a run for its money.

What gives many people in the fediverse hope is that Meta is building Threads atop the open ActivityPub protocol. (The interconnected ActivityPub servers powering Mastodon and other services are called the fediverse, and yes, no matter how many times I hear it I still cringe.) By doing so Threads content will be available to anyone in the fediverse, and fediverse users will be able to interact with their friends on Threads. This works exactly the same way email does, if you have a Gmail account you can still email your grandma with her ancient AOL account.

Some cynical people in the fediverse assume that Meta won't actually stay true to their promise of federating, but I think it's actually in their best interest. This is what Ben Thompson has coined a Strategy Credit, where you get credit for doing something widely considered to be good but is also beneficial to you. By federating with ActivityPub servers Meta will have access to content across the entire social web. Federating with ActivityPub servers is less about Mastodon and more about a service like WordPress, which hosts nearly half of the internet's content. Imagine having a pipeline to all of that great content and applying Meta's machine learning models to those posts so their users can see the best of the best across the web. This would make Threads a premier destination for content across the web, and Meta wouldn't have to deal with angry publishers or content moderation since it's not their content. This sounds like a dream for Meta, and at the same time may end up being beneficial to the open web.

Open Questions Muddling A Federated Future

Let's say Threads really does federate, and they even act in good faith doing so. There are still many open questions about how life outside of the Meta ecosystem will look. I won't go into all of them, but to provide a choice few:

What if Meta's search deprioritizes people on other servers for safety or quality reasons? It makes sense when you have a platform that serves over 100 million people that there will be many spammers, state actors, and bad people. The best way to tackle these issues is to gather insights from data, and third parties may not provide that data. If running your own server means being deprioritized, even with good reason, it may mean being cut off from my community at any point in time.

What if Meta's algorithm adds more weight to posts from Threads users because they have more insight into their users?
It's much easier to operate on first-party data than it is on third-party data, especially since it seems like Meta is taking the privacy expectations Mastodon users have somewhat seriously with Threads. If my friends aren't actually seeing what I post then will they really feel connected to me?
What if Meta builds good features that don't translate to ActivityPub? For example on Threads you can control who replies to you (a feature I desperately need given my experience dealing with random Mastodon users), but since it's not a part of the ActivityPub spec that feature isn't available to Mastodon users. This isn't nefarious, it's perfectly reasonable. Meta's goal is to provide their users the best experience they can so they feel safe to come back and engage with the platform (and ads). If I trusted Mastodon with my online experience I would miss out on a lot of features like this, features that in all likelihood would make my online life better. Being on a different server may end up feeling like being worlds apart from my community, which is exactly the problem that brought me here. I hate to say it because of my moral questions, but the world is an impure place and I think I trust Meta more than Mastodon or Bluesky or whoever to listen and build a product that better connects me to my community.
Where Do We Go From Here?
I've been spinning my wheels for a week and I'm not any closer to answering the big question: do I choose my values or connection to my community? I love philosophy, and many philosophers would tell me to live my values; those are by definition the choices I have to live with. But maybe the lesson is that there's no purity here. I'm living in a world with many variables I can't control, and as much as I want to have it all it doesn't seem like I can. I believe the same things I once did, but it all seems harder, messier, and more difficult to rectify. Perhaps it always was, perhaps this is wisdom and that was naïveté, or perhaps it will get better. Only time will tell, and until then I still don't have my online dream home.
Appendix: Building My Ideal Home
There is still a very good chance I don't end up using Threads. Owning my data is still a very high priority, and I want to have my own space on the internet without Meta sticking their tentacles into it. When you use an ActivityPub-based service like Mastodon the choices your server owner makes are the rules you abide by. If my server owner doesn't want to enable local search (as mine doesn't), then I'm out of luck and can't even search my own Mastodon posts (as I currently can't). Because of that I don't plan on continuing to use macaw.social as my Mastodon instance long-term; I would prefer to have control over my experience.
The most straightforward option is to run my own Mastodon server. As noted earlier I can't say the Mastodon experience is very good, and I don't particularly want to tie my online identity to the Mastodon community. As explored in the amazing essay Paradigm Shifts For The Decentralized Web, Bluesky's concept of a PDS solves the ownership problem much like running my own Mastodon server would. A PDS would allow me to host my own data and let anyone I want access it. A service like Meta can have a peek into my data for the purposes of showing it to users on Threads, but they wouldn't be able to follow me all across the web based on my data and usage habits since I wouldn't be using the Threads app.
Since I find Bluesky's technical solution to be the best match for my personal desires I'm leaning towards setting up a PDS once it becomes easier, and using the AT protocol with a bridge that lets me interoperate with ActivityPub servers. There are still many questions with a setup like that though. It would be a piece of infrastructure I have to maintain, the most far along project, Bridgy-Fed, still has many open questions, and I don't know how it would work in practice. It's like I'm both living through the early internet, waiting to see what emerges, and spending the rest of my day on the internet circa 2023.
I think custom feeds are brilliant and truly necessary, though I may be a little biased because giving users the ability to construct custom timelines was the last project I was working on at Twitter before I left the company.
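To make the custom feeds idea a little more concrete, here's a minimal sketch in Swift of how a client-side filter could compose mute rules into a personalized timeline. The Post and FeedRule types are hypothetical stand-ins for illustration only; Bluesky's actual custom feeds are produced by server-side feed generators rather than a client-side filter like this.

// Hypothetical, simplified model of a post and the rules a user might compose.
struct Post {
    let author: String
    let text: String
    let labels: Set<String> // e.g. labels applied by moderation services you subscribe to
}

enum FeedRule {
    case muteKeyword(String)
    case muteLabel(String)
    case muteAuthor(String)

    // Returns true if the post is allowed to stay in the timeline under this rule.
    func allows(_ post: Post) -> Bool {
        switch self {
        case .muteKeyword(let word):
            return !post.text.localizedCaseInsensitiveContains(word)
        case .muteLabel(let label):
            return !post.labels.contains(label)
        case .muteAuthor(let handle):
            return post.author != handle
        }
    }
}

// A custom timeline is just the firehose with your own rules applied.
func customTimeline(from posts: [Post], rules: [FeedRule]) -> [Post] {
    posts.filter { post in rules.allSatisfy { $0.allows(post) } }
}

The interesting part isn't the filtering itself, it's who gets to define the rules: in this model the person reading the timeline does, rather than a single global moderation team.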
Year Of Focus
To set expectations for you my dear reader, this blog post was written for me, not for you. It's very long (quite long), but I'm still proud of it enough to post for the world to read.
The frigid days of December are often unbearable in New York City, but those same freezing temperatures combined with the slow down of work, life, and everyone's collective desire to rest up after a long year afford plenty of opportunity to sit and reflect. At the end of every year I start to think about what I'd like the next year to look like, and then I set a theme for the upcoming year to help me make those ideas become a reality.
Yearly Whats?
A yearly theme is explicitly not a resolution, but a guiding principle you can look to over the next year. A yearly theme shouldn't be too specific, otherwise you could just craft a resolution, and it shouldn't be so broad that anything could fall into that theme. I've borrowed the idea of yearly themes from the Cortex podcast, where they discuss at length what yearly themes are, and how they approach their own themes.
"We are what we repeatedly do. Excellence, then, is not an act, but a habit."
– A quote commonly misattributed to Aristotle
What I like to do for my yearly theme is to look at a part of my life that's stopping me from being the person I want to be, and then work backwards to figure out what ideas, practices, and habits I can adopt to become that person.
New year new me, right? Wrong. I treat yearly themes as a way to build upon the work I did in the previous year, always striving to become more the person that I want myself to be.
Yearly Themes Past
2022 (Year of Trial & Error)
2021 (Year of Building Foundations)
2020 (Year of Pushing Boundaries)
2019 (Year of Creativity)
2018 (Year of Stabilization and Independence)
I'll zoom in on 2022 in a personal appendix at the end of this post, so let's start in 2018. If you read these themes from oldest to newest, a story emerges. I was reeling from a divorce, and my #1 priority was to stabilize my life, and re-learn who I am outside of my marriage. I spent much of 2019 working on projects not because I thought they would bring me fortune, fame, or acclaim, but because they scratched a creative itch. By putting out those projects and seeing that they resonated with people just as they were I grew the confidence to push my boundaries, sharing more and more widely, delving into things that I previously would have been too scared to try. In 2021 I decided it was time to start thinking about a future where I don't work at Twitter, and instead pursue my dreams of being an indie developer. So in 2022 I leaned into the fact that being an indie developer would mean making many mistakes as I built a business around my own software, and that meant a lot of trial, error, and being ok with the mistakes I make.
Year of Focus
I managed to accomplish so much in 2022, but most importantly I was able to focus on my health after a very difficult 18 months. Working at Twitter was extremely difficult near the end of my tenure because of deteriorating health, and also a loss of motivation due to the related externalities. But while I was still working at Twitter I was spending nights and weekends working on Plinky. I used the positive initial response and support from friends and loved ones to quit my job and focus on building apps full time. The personal appendix discusses this in a lot more depth, but in terms of my yearly themes, this year I'm working to rein in the worst of my habits: my distractibility.
I love to dream big, and letting my creativity run wild has taken me to some new and interesting places. I think it's something that sets me apart from many developers, and has allowed me to build some things that truly resonated with people in expected and unexpected ways, which is something I don't want to lose. But it's also important to set a few goals this year and make sure that I get them done, this is the difference in knowing how to do everything, and having the time, energy, and wherewithal to do everything. I have to draw some lines and make some tough choices about what fulfills me to ensure I don't spend my days working on an app, but instead am building an app that's the cornerstone of a successful business. Apps, Apps, Apps (And Business) Ship Plinky to the App Store Ship 2 major feature product updates Stretch goal: Ship a smaller app to grow Red Panda Club My top priority in 2023 is turning my software into an indie apps business. I spent four months last year working on Boutique and Bodega, and while I didn't intend to make money from them, I really appreciate the people who sponsor my open source work. Those two libraries are the foundation of my first indie app, Plinky 1, taking care of pretty much any state management one needs in a modern iOS app. I've built and shipped many apps before, but none that I was working on full time with the intention of building a business on top of. It turns out you can't make money as an indie developer until you have a product people can pay for though, so my success in 2023 starts and ends with shipping Plinky to the App Store. Shipping is just the beginning. The day you launch is closer to the first day of the journey than the last, which means there's still a lot of story left to be written once Plinky is in the App Store. I have a backlog of features I'd love to build as long as the Grand Canyon, but it's more important to listen to my users about what they want than to trust my gut. All of the feedback I received after launching Plinky's public beta was incredibly helpful. There was so much positivity and excitement, people told me so many things they wanted to see, and of course they found plenty of bugs. A sign that I'm building a product that will resonate is how many of the features people requested were already on the roadmap. There's no substitute for people using the product and telling you what they want, so rather than building out everything I think will resonate I'm making sure that I ship the minimum viable [polished] product, and then will build more features afterwards. Hopefully it will be more than two features, shipping regularly is an important sign of commitment to my users, but I'd like to set expectations low given how many other things there are involved with building a business. I don't expect my stretch goal to come to pass, but it's valuable to keep in the back of my head. I'm building an app with the code I write, but I'm also building a company. My dream is to build personal playful productivity apps that help people be the best versions of the person they want to be, but it's pretty rare that someone predicts exactly how they get to their dream. I keep a doc of "Interesting Business Ideas", ideas that I hope to implement, building a business around my values. It has idealistic goals like my Red Panda Promise, a plan to donate 5% of my profits every year to offset climate change and help preserve red pandas. 
It has practical ideas like giving away a free month of Plinky Pro to people who send bug reports that lead to me fixing problems or feature requests I implement, because they should be rewarded for helping me build my dream. And of course there's something I'm already doing, working in public and giving away as much knowledge as I can so other people can start business centered around their own novel ideas. Shipping a second app would give me more of a playground to experiment with these ideas, or to even potentially work on a project with a partner. My dreams don't stop at me working solo, they're about building things that matter to me and resonate with others. Working with the right person is such a wonderful feeling, so if I find the right person I might consider bumping one of my smaller projects to the front of the queue, something we can build together on nights and weekends. The reason this is a stretch goal though is because taking my eye off the ball and splitting my attention is clearly antithetical to my Year of Focus. I would only consider shipping a second app if the right circumstances emerge, but you never know. Simple But Effective Steps Towards Working More Effectively Use flow sessions per day to focus on my work Plan out my work tasks in Craft Make sure my todo list is under control Be less reactive through better batching Some days I wake up and can't wait to begin working. Other days I'm so distracted by every possible distraction that I have to conjure up the will of Thor to start my work day. I spent much of 2022 letting my creativity guide me to what I should work on next, and it resulted in quite a fulfilling year. I often found myself in research mode, which led to me creating Boutique and Bodega. I spent a lot of time prototyping really interesting ideas, resulting in some truly unique experiences for Plinky. And while I was able to get Plinky out to users in my public beta, it feels like I haven't had to "work like an adult" in a while. Even my work environment at Twitter was very reactive, and led to me forming plenty of bad habits for accomplishing deep work. The scope of my work will continue to narrow by necessity the closer I get to shipping Plinky to the App Store, and that's where practicing good work habits will become crucial. I've never lacked motivation, but my lizard brain gets distracted very easily. The Unreasonable Effectiveness Of Just Showing Up Everyday discusses how once you get started, staying in the flow is much easier, it even becomes quite an exciting state that you strive to reach. Over the final month 2022 I started using the app Flow to, well, stay in my flow. Flow is a pomodoro timer, with a twist. Flow lets you denylist apps and websites, so when I try to open Slack, Tweetbot, or Messages during a pomodoro session, it will just close the window automatically. Having a hard boundary allows me to stay in my flow, now when I tab over to check out what's happening on Slack I'm no longer distracted, instead I'm greeted with a reminder to stay in Xcode until Plinky finishes building. I've picked up another good habit in the last two months, I've started to plan out my day and my week in Craft. As cliche as it is for a software developer, I always try to bite off more than I can chew. Having a list of tasks I'd like to accomplish provides me enough perspective to see how much I really am trying to do, and keeps me on track when I see something else I want to do like spend 30 minutes tweaking an animation. 
Planning out my week gives me a good overview of just how much I'm really trying to accomplish, because it's easy to lose sight when you're only focused on that day. And it's also nice to be able to look back and see all that I accomplished, like a little gold star from my teacher at the end of a productive week. I've always been a todo list person, but by the end of 2022 I found myself overwhelmed with 50 tasks to do on my todo list, which was clearly not actionable. My list was filled with everything from urgent chores to stray ideas I didn't want to forget to long-term plans. I spent two days at the end of the year to find each idea their right home, whether it was in Things, Craft, or in the mental dumpster. Anything that isn't actionable in the short term has a better home in long-term storage like Craft, it's not necessary for me to wading through my entire future all the time when what I really have to do right now is pay the electric bill. Most people operate their lives around the standard work calendar. But as an indie I have an inordinate amount of freedom to shape my day, including working later into the night when my mind is ramped up and operating at it's best. I consider that a real blessing, but the bad habits of working reactively that I picked up working at Twitter aren't good for my personal life either. I constantly feel like I'm behind and playing catch up, or need to respond to my friend the same way I'd be tempted to respond immediately to my boss. But I don't have a boss, and my friends definitely aren't the boss of me. The solution is pretty simple, but took some time for me to see. I should just not do that, instead of responding immediately I can designate 10am and 6pm as the time I look at emails, find 10 minutes in the middle of the day to respond to some friends, and make reasonable exceptions when there's some urgency or necessity to respond quickly. I expect these boundaries to not only make me less distracted, but will also make me more effective and will enable me to do better work. Energy More morning or mid-day meditations Rest one day on the weekend, for real Figure out where I can cut some social obligations Plan for no more than three weekday social obligations per week My day is dictated by time and energy. We all have the same amount of time, but we have differing amounts of energy at different times. There are many things I can change in my life, but I'd benefit most from focusing on how I apply my energy, and where I apply it. I've meditated every day for the last five years, it's one of my best habits. I always feel better after meditating, but when I'm at my busiest my daily meditation will often slip into the evening or possibly very late into the night. When I'm my least focused and most frazzled is when I need to take a step back and meditate, so it's important for me to stay conscious, pause during the day, meditate, and regain my energy. It's also important for me to take a real day to rest, something I often have trouble doing. I'm a constantly curious person, so historically I've found myself constantly solving problems and puzzles until eventually I burn out and need a real break. It's important for me to avoid that trap, and to take a day off every week to make sure I can recharge. Creativity needs room to breathe, and that day off comes with other perks. It will give me space to be excited for that next day of work when I do get to let my mind loose on something I want to solve. 
Life is about more than just work, and I need time to live life and celebrate the opportunities I have. This year I did a much better job of putting down the computer and spending time with the people I love, and I intend to keep doing that. As for the second day most people take off every week, I tend to work six days a week but with a more relaxed schedule, scattering for chores and obligations that most people accomplish on the weekend. This year I'll use one of my days off to do the personal and work planning I mentioned earlier, maybe scratch a creative itch, prototype an idea I have, or spend the day with family and friends if I'm feeling particularly social. Your energy is bound by what you do, but it's also determined by what you choose not to do. 2022 was a year filled not only with work, but a lot of socializing, travel, and quality time with loved ones. As an introvert though I've found my energy running really low, and often feeling like I can't be my best self for the people in my life. Too often I've treated the time I have to work as a gift because it lets me go into goblin mode behind a computer, and that's a signal that I need to find a way to stop feeling so overwhelmed and stretched thin socially. It's incredibly hard to do, but I need to figure out where I can cut some of my social obligations. The ability to connect with friends across the world thanks to messaging, video, and airplanes has made me feel more connected to them than ever. Every week I have a few FaceTime calls, and I stay in touch with people over iMessage and Slack all day. I took four major trips to see friends in other parts of the world this year, and it was incredibly fulfilling. Spending time with people in person made me realize how much is missed over a monthly FaceTime call, and I don't have a good answer about how to fix that. But it does make me think, even if it isn't easy maybe it's worth trying to move those calls back from every four weeks to every six or eight, to reclaim some of that social energy I've been giving away. I need to get that energy back somehow, and limiting myself to three social obligations per week (including FaceTime calls) feels like a reasonable balance between keeping up with friends and running at a constant energy deficit. For much of 2022 I had the flexibility to work around people's schedules, but now that I'm working full time and more focused, I've become bound to the same constraints as everyone else. It's far from my favorite thing to do, but I have to be responsible and set hard(er) limits, staying vigilant about my energy, so I have energy to spare. One thing I'm not willing to give up is my time with Colleen. We have our weekly date night, time together on the weekend, and dedicated space for each other every night before bed. We spend plenty of time together besides that, and even though I have space to be my own individual, I'm always happy when I'm with Colleen, so spending time with Colleen has to stay a priority. Tradeoffs Improve my sleep regimen Shorter journal entries to reclaim my time Figure out where I can cut my information intake More bad news, as an adult I have to start setting better boundaries. I don't want to go to bed at 1am every night anymore, it makes me feel like I'm not a functioning member of society. I always get my 8 hours of beauty rest which means I'm often crawling out of bed at 9:30 or 10. 
Every time I've tried to be a morning person it's failed, but I should be able to wind back the clock to 12 or 12:30, or gasp, maybe even fall asleep before midnight.
I spend quite a bit of time journaling every night before I go to sleep, and along with meditation it's one of my most treasured activities. It's worth the time I put into it, but it can also be really daunting, especially as the last thing I do before going to sleep. In an effort to preserve my memories I've found myself adding more and more details to my journal entries. The more detail I add the longer a journal entry becomes, which means spending more time journaling every day. That time adds up, and it adds up fast. I've noticed my journal entries balloon from 400-600 words per day to 800-1200, which means spending 20-30 minutes writing rather than 10-15. While I find the longer journal entries better to read afterwards, it's a lot of time to give up. I intend to reclaim my time by focusing in on the important events of the day, adding flourishes and details to capture my memories as accurately as possible, rather than recapping everything that happened that day, down to the taste of the bagel I had to start my morning.
An evergreen reason I find myself constantly feeling behind is that I've done this to myself: I am constantly behind all the content entering the firehose of my phone. There's always one more thing to read, a show to watch, a podcast to listen to, and all this content comes in all day every day. I'm always happy to indulge myself with something else to consume, but I've noticed that when I have free time my mind really enjoys wandering off. I don't need to put on a podcast or some music when I'm walking around, the world is so rich and textured, I should enjoy the full experience of my surroundings. I'm very happy when the content runs out, so why shouldn't I give myself more of that? All I've been missing is permission. I'm giving myself the permission to drop things. I've unsubscribed from a bunch of podcasts, I've removed a bunch of RSS feeds, I've unfollowed a lot of people on Twitter (and Mastodon), and I'm giving myself permission to put down books I don't like rather than begrudgingly finishing them. There's a central motif underlying my yearly theme of focus: my time is precious, so I should focus on what's important.
Appendix: 2022
I'm really proud of how I leaned into my Year of Trial & Error in 2022. I accomplished so much by deviating away from my tendency to make safe choices, having always feared the impact of making a mistake. I sold myself on 2022 being a year where it was ok to make mistakes, and to show myself that I can recover from any mishap. Not only did I come out ok, I had a great year where I accomplished so much. But I also learned a very valuable lesson. When you go down the "wrong" path, it often ends up being the foundation for another, more formative path. When you make reversible decisions you can always walk back down the road you came on, and take another path from there. This time though you'll be armed with knowledge and experience that you didn't have the first time, letting you make better choices. You can even walk back up the same path you came down, succeeding this time because you're prepared for what lies ahead. Beautiful, amazing, wonderful things can happen when you do something you're unsure of. This year I did one of the least safe things I could imagine doing: I quit my job at Twitter to pursue my dream of being an indie developer.
I've been dreaming of building a business around my apps, and in 2022 I took a big leap forward. I'd been preparing mentally, emotionally, and financially for the day I decided to leave for over a year, but that didn't make it feel any more practical. I left on a Friday, had a blissful weekend, and on Monday I woke up to frantic texts from friends asking and telling me about Elon Musk acquiring enough shares of Twitter to become the top shareholder. I didn't know what I was going to do that day, but I was very glad that this news didn't directly affect me anymore. If I had stayed at Twitter for one extra day I would have had a completely different perspective about leaving rather than all of the positivity I had around my departure. The lesson here is that no matter how safe or unsafe you feel making a decision, you always only have a small subset of information. No one can predict the future, and the future is more vast than the present, so worrying and trying to optimize every detail isn't a recipe for success. You can only control the parts of any major decision that you have direct control over.
Contrary to the chaos that ensued at Twitter over the coming months, I spent my first months of self-employment following my creative impulses, and they took me to some amazing places. I won't spend too much time going over every one of them because this is the personal part of the post, and I know what they mean to me. Instead I'll list them off, with some details in case one of them catches your eye and you feel like taking a deeper look.
I built two very successful open source projects in Boutique and Bodega. These projects even led to me being interviewed by the GitHub team. (Hi mom and dad, I know you don't really get what I do but I'm happy you listened to the podcast and were very proud.) Boutique and Bodega serve as a good foundation for creating MVCS, an architecture for building SwiftUI apps that's gotten some traction and helped me a lot.
I needed an image-based API for my Boutique demo project, so I built one that serves images of red pandas. This was a fun example of what I could do because I'm now working for myself; there's no immediate business value to having spent 2 days learning Cloudflare Pages, R2, and a whole bunch of other technologies, but given my company is named Red Panda Club I'm sure I'll find plenty of fun uses for this going forward.
I rebuilt and redesigned my personal website, fabisevi.ch, something I've had on my todo list for nearly a decade.
I created build.ms to supplement Red Panda Club. Red Panda Club is where my apps and products will live; build.ms is where I'll be sharing the lessons I learn building Red Panda Club, with a focus on engineering, design, product, business, and more.
I wrote six blog posts across those two websites: Coding As Creative Expression, Reflections On An Open Source Project, Goodbye Fellow Tweeps, Designing A High Signal Interview Process, Model View Controller Store: Reinventing MVC for SwiftUI with Boutique, and The Best Twitter Bio, The Humble Tweet.
I put my feelings out there by spending the day before my birthday writing about all the nice people in my life.
And last but definitely not least, I launched a public beta of Plinky, my first indie app. [I'll do my best to not pitch you again in this blog post, but maybe sign up for the beta? I make you an empty promise that you'll like it.]
I did a lot more than that of course.
I read 15 books, a whole bunch of meaningful articles, traveled around the world to meet friends old and new, and of course spent a lot of time with my friends, family, and girlfriend here in New York. Those all spurred much reflection and insight, but to explain how they fall under trial and error would take far too many words.
Now we're going to get really personal. By far the biggest trial I dealt with was a mysterious neurological disorder that caused me indescribable pain for the last 18 months. People who are close to me know about it, people who know me from afar may have seen some allusions to it, but in 2022 I finally received a diagnosis for my mystery ailment after seeing dozens of doctors and specialists who couldn't figure out what was wrong with me. Thank you to my neurologist for diagnosing my cervical dystonia, which allowed me to start getting treatment for the condition. In most cases it's incurable, but since my condition has a good chance of being linked to trauma, I may be able to heal over time. I've seen a lot of progress and pain reduction not only from the quarterly botox injections in my neck and medication I'm taking, but also from the work I'm doing in somatic therapy to address the underlying trauma.
There were many errors along the way; I tried so many different things to feel better. Every day there was a new and slightly different pain, and every day I would try to figure out what was happening. Somatic therapy has not only helped me face the trauma that's causing my brain to malfunction, but to learn how to listen to what my body is telling me that my brain won't allow me to hear. But I persevered. I knew I couldn't live like that, I love life far too much to give up. I love Colleen who's been there for me every day, and I love my mom who's been there to listen to me as I've gone through so much. I love my brother and my dad who have their own sense of how I'll get better, and are supportive no matter what. While every day still carries some difficulty, it's easier than it was six months ago when I finally found my doctor and somatic therapist. It's easier than it was three months ago when I started botox injections. It's easier than it was two months ago, one month ago, one week ago, all of which are times I made major breakthroughs in somatic therapy.
None of this would have been possible if I hadn't quit my job. I was suffering physically every day, and even with the generous time and space away from work that Twitter allowed me to focus on my health, I couldn't focus on my health enough to see progress. I needed this time in my life, I needed this space, I needed to feel like this year was a year dedicated to me. I said that quitting my job was one of the least safe things I could have imagined doing, but sometimes doing what's safe isn't the same as doing what's right for you. Having learned that lesson, I consider my Year of Trial & Error a tremendous success.
Plinky is an app that lives between a link utility and a bookmarking app, with some very novel collaboration features that I haven't seen on any other app. I think there's an interesting space for managing links that don't really fit into the mold of Pocket, Instapaper, Matter, or other apps that focus on reading. I'd love it if you checked it out here or signed up for the beta.
Coding As Creative Expression
Is coding a science or an art? â Matthew Garlington (@dejadu13) May 21, 2022 I've seen many versions of this question posed over the years, and to Matthew's credit it's a very good question. As you can see in the replies people translate their lived experience writing code and answer art or science based on however they conceptualize and practice programming. A few years ago MIT conducted a study that concluded "reading computer code is not the same as reading language", answering the question of whether coding is art or science with a rigorously documented "both". While I'm hard-pressed to argue with science, I'd like to provide a different answer, one that's a little more conceptual. The instinct to qualify coding as art comes from the practice of programming. While writing a program software developers make many choices, ones that rarely have hard and fast rules. As long as your program compiles and runs as expected, you can make any choice you want. It's all human input, if you ask 100 developers to write a complex program they'll write it 100 different ways. Heck, if you ask one developer 100 times they may write it 100 different ways. Code can create something beautiful, enable an amazing experience, and people will even call a piece of code elegant or exquisite. To someone in the arts this all sounds very familiar. The instinct to qualify coding as a science comes from the output of a program. Developers desire deterministic results for their programs, when you provide an input to a program you expect to receive the same output every time. The practice of programming itself can be imprecise, and the same way that science operates in unknown space, building a program can often feel the same. Developers will try to minimize the ambiguity by using industry-tested practices, much the same that a scientist uses a standardized beaker rather than throwing some chemicals in an unclean cup and seeing what happens. When put together that truly sounds like the practice of science. What we've found ourselves asking is whether code is a paint brush or a calculator, and frustratingly the answer appears to be both. Despite the fact that programming looks like art and looks like science, I still think there's a concept that better fits the practice of coding, creative expression. We can look to writing as a point of comparison. Sometimes it's hard to believe that the most beautiful poem uses the same medium as boring technical documentation because the artifacts look and feel so different, but we all know it's true. The medium doesn't impose limitations on how someone can choose to express themselves, it's a tool for creative expression. With writing it's impossible to separate the medium from the artifact created, and the same is true for code. Code can be artistic and create something new in the world, or it can simply exist to accomplish a task. Code helps people capture the meaningful moments of their lives, lets you carry the beauty of every song ever recorded in your pocket, but it's also the boring spreadsheet that helps you run your business. So is coding an art, a science, or creative expression? I say coding isnât science, it's not art, itâs not quite a craft, but a malleable form of creative expression. When given an infinitely flexible and manipulable canvas people use their imagination to create wondrous things, ones not limited to simple categorization.
One Last Visible Change, Goodbye Fellow Tweeps
Hard to believe it's over. My time at Twitter wasnât perfect but it was incredibly special and there's little I would change about it (though less crypto would be cool). I was able to provide constant feedback about products across the entire platform and the entire organization, work on some of the most pressing digital societal health issues of our time, including the 2020 US presidential election, and help launch numerous products to minimize abuse and harassment. I never once felt like I couldn't advocate for the concerns of the sometimes thoughtful sometimes bonkers people who use Twitter, providing a voice for people who don't get to have a voice inside Twitter. There's so much more I could say but instead I decided to post the going away email I sent to a thousand or so people at Twitter. I was really heartened by plethora of thoughtful, meaningful, and overwhelmingly kind responses I received, enough so that I decided to publish it publicly with minimal edits to provide context or clarifications such as the fact that Visible Changes is an internal mailing list for new products shipping at Twitter since there are so many teams working on so many different projects that it would be impossible to keep up with everything happening at Twitter without subscribing to the Visible Changes mailing list. If you're curious about what I'll be up to now that I no longer have a job, feel free to check out this thread, unsurprisingly on Twitter. Howdy friends, colleagues, and strangers (sorry for the email strangers, feel free to send this straight to the archive!). You may recognize me from my occasional long-winded Slack messages so it should come as no surprise that I decided to squeeze in a 1,300 word going away email with a linked Google Doc1 of feature requests before I leave. After 3 years, 10 months, 3 days, and one pandemic at Twitter, my last day at Twitter will be April 1st. (Yes I recognize that me writing an org-wide email joking about quitting on April Fools Day would be incredibly on-brand for me, but I assure you itâs true.) Iâm so proud to have worked at Twitter, and I forever will be. I'm incredibly grateful for the work I've been able to do directly, influenced indirectly, and most importantly so thankful to the people I've met and worked with along the way. (Maybe the real friends were the coworkers I made along the way â those of you not in the strangers category will appreciate this.) I love this company, I love the people, I love using all the latest and greatest experiments in Earlybird (so please remember to bucket me into all the good experiments). I want to stay connected with many of you, now and beyond, so please donât be shy about throwing some time on my calendar or reaching out to me by other means over the next two weeks or after â Iâll always make the time. To get the first question everyoneâs been asking out of the way, Iâm not going to work anywhere else. Itâs definitely not Facebook (lol, Meta or whatever they wanna be called), and never will be. Iâm taking some time off (2, 3, heck maybe 6 months off but donât worry Iâll still be tweeting) to recover from whatâs been a very taxing year physically and emotionally, focusing on some health issues that Iâve been dealing with. But what Iâm really looking forward to is wandering around the streets of NYC this spring, riding around on my bike, and enjoying time with family, friends, and loved ones. 
Twitter has been the job of a lifetime, but right now the last thing I want is a job, so I guess Iâm just going to not have one. Since most of society has to exchange their labor for capital in some manner after Iâm refreshed and rejuvenated I do intend to do the same, but working for myself again like I did before Twitter. This time around Iâd like to try my hand at building playful and creative indie software products, tools oriented around helping people leverage technology for their personal needs, with a matching company that combines my love for teaching others and helping people reach into their hearts to derive the true value of what they can do. Iâd be remiss if I didnât say I learned some of the valuable lessons I'll be using and sharing from my time here at Twitter, and will do what I can to reach others through writing (longer than tweets) so they too can benefit by learning from others learning. If youâd like to know more about what I'll be up to by all means please reach out and Iâll be happy to share. Iâm sure you'll appreciate that this is where I say the best way to keep up with how itâs going is by following me on Twitter @mergesort, with indie development updates and red panda gifs @redpandaclub. I'm always looking for feedback, the sense of community I built here is something I wouldnât trade for the world and will miss dearly, I'd love to keep as much of it as possible. And because I trust you all with my personal email you can always reach out to me at [nice try but you don't get this random blog post reader]. I want to say thank you to every team I worked with (aka bugging every feature team to build my pet idea â some of them were actually good!) and worked on, Communities, Communities Health, Twitter Dev, and Notifications. But a special shoutout is reserved for Health, and that goes even further for the team formerly known as Health Client. We built some incredibly meaningful things on Societal Health and beyond, but more importantly we built a team of amazing people on the foundation of empathy, caring, and curiosity. To those people Iâve had the pleasure of working with on that team, I donât say this lightly but I love you like family. The work you do isnât only about helping Twitter today, itâs some of the most important work for keeping Twitter an important part of the world in 5, 10, or 50 years from now. But today Iâm still at Twitter, and as a parting gift in the spirit of our company value fearless honesty Iâd like to leave a few thoughts Iâve had bouncing around in my head over the last few months while thinking about the unique and wonderful place Twitter has been to work. Twitter isnât the biggest platform in the world, but its effect makes it the most consequential tech company in the world. The world takes its cues from Twitter, and because of that we should find ways to get more users by opening Twitter, not closing it off from the open web. When we move away from Twitterâs open nature weâre losing a bit of Twitterâs service to the world, I hope we remember that in everything we build. Let your values guide you. The success of Twitter isnât DAU, DAU is a lagging indicator that shows weâve succeeded in building a good product. Weâve heard a lot about our DAU goals and we should shoot for them, but please donât let the ends become the means. Never stop looking for paper cuts. 
If you look closely youâll see more of them happening more frequently as the system we build becomes more and more complex, and the expression death by a thousand cuts has resonance for a reason. Health isnât a fixed goal and it isnât a lever we can pull up and down depending on where weâre focusing our energy. As we gain users and build new surface areas Health problems will only become exponentially more difficult in unpredictable ways so please donât view Health as something that can be balanced with user growth. Lean into and invest in the infrastructure the Health org has built and turn expansion into newfound success, striving to build a better and healthier social network than any of our competitors. Twitterâs culture is unique and one of a kind, please donât lose it. Especially as the company grows itâs easy for culture to dissipate. New perspectives are incredibly important, donât be shy about integrating new tweeps and their ideas, but for those of you who have been here for a while your job is also to teach whatâs made Twitter so special that people all over the world want to join and leave their mark. That culture only continues to be world class if we help new tweeps know about it. Unfortunately over the last year Iâve seen a lot of newer tweeps across the entire company struggle to feel like they understand what theyâre supposed to be doing, and thatâs not their fault. Every personâs job involves doing the work they came here to do, but a part of that is setting every new tweep up for success. Keep helping new tweeps succeed, so look at a person to your left, look at a person to your right, look at your Google Meet screen, and remember that you only succeed when they succeed. Iâll sign off with a few words @Kayvon once said that have stuck with me since â you are absolutely right Joe. âđ» P.S. Thank you Kayvon for always being a good sport the 945 times Iâve used this clip to make a joke, all the best on your parental leave! P.P.S. Bothering yâall while I still can, hereâs a list of features and ideas I would like to see be built. Consider this one last dump of practical and reasonable ideas Iâd love to see Twitter build since I wonât be able to bother people after April 1st, 2022. (You donât want to see the list of impractical ideas I have saved.) A quote tweet redesign that de-emphasizes the original content to detract from the nature to dunk, rendering more like an organic reply thatâs visible on your timeline, and the ability to de-index your tweets from search as discussed in Slack here. (We've talked about de-linked quote tweets on the Health side before, and I'm all for it. I've also wanted a similar feature for search. It would be useful to give users the power to delist their tweets from search, that way we could prevent people searching through people's old tweets and dogpiling them for years-old comments that may not be reflective of today's norms, but still allowing those tweets to live on a user's profile so the original author can choose to surface/resurface them as desired. I guess a better way of putting it, allowing users to opt out of letting their tweet(s) be publicly indexed for search.) Third Party Verification. Twitter should be the central hub for identity on the internet, and we can get a step closer to there by letting people authenticate with third parties such as YouTube, Instagram, Tik Tok, Github, etc, and have those destinations displayed on a userâs profile without resorting to hacks like LinkTree or this. Fix open graph tags. 
I cannot tell you how many iOS users would rejoice if you could play videos in tweets without leaving iMessage, or at least see that a tweet contains a video with a little play button over the image rather than rendering a static thumbnail from the video. If you send a quote tweet to someone it shows up as some text and a link so users sometimes donât realize itâs a quote tweet. And showing the date of the tweet would do well to help lessen the spread of outdated information (which can become misinformation). Please donât ship an edit button, but do ship Clarifications. (go/clarifications) Timeline sync, the way third party clients such as Tweetbot implement it. I would love to leave my Latest timeline on one device and pick it up on another device, that way I donât lose my place. This can be pretty simply done by sending down a âsync cursorâ, and would likely bring a lot of fanfare and users over from third party clients. A slight redesign to the composer to make it feel more WYSIWIG. Reading in the context of a tweet makes it easier to catch mistakes so looking at a live preview as youâre composing it not only should look better, but should hopefully reduce the rate of errors and typos in tweets. Tweet digests. Follow the best tweets from a person in a day, being able to set custom criteria. Show me the top five tweets from a person in a day, only show me tweets with 50 likes or more from people I follow, etc. Thread marker. Instead of users manually writing 1/25 to signify the first of a 25-tweet thread, since we know how many tweets are in a thread we should show a little bubble on the tweet in the top right corner that says (1/25). Itâll save people precious characters and let threads grow in size without having to know how long theyâll be upfront. Searching my bookmarks and likes has gotta be some of the lowest hanging fruit at the company and has been built in hack weeks multiple times, can we please ship that? While weâre at it can we expose filter:follows filter:nativeretweets in search so users can find tweets theyâve seen (including injections) without having to remember such a wild query? đ„ș Please stop making Latest harder to use. I know research and metrics show that Latest has less engagement than Home but people who use Latest are different users, not worse. They may not engage as much by choice but you wonât convert many in that specific batch to being on Twitter more by making Latest harder and harder to use, instead you risk losing them entirely. And last but not least, itâs not a feature per se but I would love to see Twitter become the industry leader on harassment, spam, misinformation, etc, not only by working with with governments and NGOs â but by working with other companies and our peers there who also want a safer and more secure internet. Together we move farther than we do alone, and a healthy internet is an internet thatâs healthy for every participant around the world. â©
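As a rough illustration of the timeline sync idea in the list above, here's a small Swift sketch. The types, the key name, and the choice of iCloud's key-value store are all hypothetical; the email only suggests that a "sync cursor" marking the last-read tweet would be enough to restore your place on another device.

import Foundation

// Hypothetical sketch: remember the last tweet the reader saw, sync that
// marker across devices, and scroll back to it when the timeline reloads.
struct TimelineCursor: Codable {
    let lastReadTweetID: String
    let updatedAt: Date
}

final class TimelineSyncStore {
    private let store = NSUbiquitousKeyValueStore.default
    private let key = "timeline.sync.cursor" // hypothetical key name

    // Save the cursor when the user leaves the timeline on one device.
    func save(_ cursor: TimelineCursor) {
        guard let data = try? JSONEncoder().encode(cursor) else { return }
        store.set(data, forKey: key)
    }

    // Load it on another device and scroll to `lastReadTweetID` if that
    // tweet is still present in the freshly fetched timeline.
    func load() -> TimelineCursor? {
        guard let data = store.data(forKey: key) else { return nil }
        return try? JSONDecoder().decode(TimelineCursor.self, from: data)
    }
}

Whether the cursor is synced client-side like this or sent down by the server as the email suggests, the core of the feature is just a durable pointer into the timeline.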
Designing A High Signal Interview Process
Crafting a great interview process is difficult1, especially for software development where a company is often trying to assess years of specialized knowledge and potential in only a few hours. The best interviews are said to feel like a discussion amongst peers, where each side is providing the other with signal about what it will be like to work together. Candidates share signals about their experience and thought process, while interviewers help provide signal and insight about a companyâs values, the working environment, the state of a company, and more. The interview process for a job in a highly collaborative environment should be highly collaborative, providing a lot of feedback and insights for both sides along the way. The interview I am imagining is nimble and fair, mimicking the day to day work of a software developer in only 3-4 hours over four rounds. Any shorter and it may be hard to glean enough understanding of who a candidate is. If it goes much longer then that it'll be unfair to candidates who can't dedicate a whole day (or two) to interviewing with your company, and will be an increasingly expensive amount of resources to expend as your company grows. Transparency There aren't any benefits to being secretive about the process so I recommend letting a candidate know ahead of time what to expect from their day with you, beginning the process from a perspective of mutual respect. Given a relatively short time constraint the ideal process would seek to emulate the product lifecycle at your company, with time at the end to fill in any gaps you and the candidate may have. The highest-value work that can be done in that time is to build a small feature end to end with partners who will be working day to day with this developer. The Product Design Round The first part of most product development lifecycles is one that focuses on communication and iteration, taking a whole world of possibilities and whittles it down to a tangible output. Thatâs what weâll aim to accomplish by pairing up the candidate with a product manager or designer, to discuss the feature theyâll be building together. The product design portion of the interview would start by walking through the scenario the company finds themselves in, the constraints they have, the customer needs, and anything else that is necessary to consider for what weâll be building. Acting as partners both sides would work together to develop the context and thoughts into an idea, ultimately leading the candidate towards a defined version of the idea that theyâve worked together to build. There should be enough wiggle room to let a candidate's creativity shine through (which is in itself a useful signal), but ultimately by the end of the interview there will be a defined spec and designs of what needs to be built so the candidate can be prepared for their second interview. If you donât reach a well-defined spec through collaboration thatâs not necessarily a dealbreaker, not all engineers shine at product design. If you havenât reached consensus on what a good product spec would look like, you should pause with about 10 minutes left to discuss what an ideal solution is. This will allow the candidate to familiarize themselves with the solution theyâll be building in the following interview, and it will also clue you into other important signals such as communication skills, how they collaborate in situations that have ambiguity, and their ability to respond to feedback (positive and negative). 
The Cross-Discipline Round Depending on if you're a front-end or a backend engineer, your second interview will be with a partner from the alternative team. (A backend developer would be asked to discuss the front-end portion in the second interview, and a front-end developer would be asked to discuss the backend needs.) In this case weâll assume that a front-end engineer is being interviewed, so the discussion would involve your teamâs backend engineer, centering around high-level architecture, API design, and other ways that a client would interact with a hypothetical backend. This interview would not be a coding interview, it would be a high-level discussion with a technical partner. The pair would discuss how the feature should work end to end, not asking a front-end engineer to understand the specific implementation details but to know where the front-end and backend would interact with each other. Knowing what SQL queries are occurring under the hood isn't as valuable day to day for a front-end engineer, but having a good idea for what a solid API looks like or how JSON should be shaped for the necessary data is highly beneficial. By the end of the interview the candidate should walk away with a fully fleshed out spec, one that makes sense for the problem they're looking to solve. The Proficiency Round This is the interview where a candidate will implement the feature we've designed and spec'd out. Much like in the first two interviews itâs important to establish early on that the candidate knows that the developer across the table is their partner and is there to help, answering questions, even pair programming if necessary. We want to get a feel for the kind of code a candidate writes, but we also want to minimize the context necessary to solve a problem. It's difficult to write code on demand, especially when there's a time constraint. To balance those requirements we wonât drop the candidate into a huge code base, and will use common platform/framework conventions so they feel as familiar as possible. Our top priority will be to build something that works, acknowledging the tradeoffs that were made along the way, and talking through where improvements could have been made (if there's room for improvement). The output we want to see is a working solution to match the spec weâve built upon in prior exercises, if this were a test this would be where the candidate shows their work. The best candidates will be the ones that not only have a good solution to the problem, but also communicate well in the process of building it. The Calibration Round If youâve ever walked out of an interview hoping the first person you met hours ago didnât misunderstand a slip of the tongue or an unclear thought, you know how awful it can be to sit with that feeling for 24, 48, or even 72 hours as you wait for a response. Sometimes a company feels like if they had only spent a little more time with a candidate they would have gotten the signal they need to make a solid hire or no hire decision. It's much better for both sides to figure any open questions before a candidate leaves the office, so thatâs what this interview accomplishes. This interview is a candidateâs opportunity to ask questions they have for the company, and a companyâs opportunity to get answers to any open questions they may still have about the candidate. Before starting this interview let a candidate know that the three interviewers they met will be taking 10-15 minutes to try and figure out what questions they have. 
It's been a stressful couple of hours, so do whatever you can to ease the candidate's nerves. Assure and reassure them that this is a normal part of the process no matter how well they did; it's not always feasible to answer all the questions you may have about a candidate in a short period of time. But also let them know that you want both sides to feel confident they know what working with the other side is like; this is a big decision to make. The extra interview session is bidirectional, and the candidate will have plenty of time to have their questions answered as well. Offer some water, let them use the restroom, ask them if they want to take a few minutes to themselves, or even offer an employee for them to hang out with, whatever they want after a long day. At first this short gap may seem anxiety provoking, almost like sitting through judgment day, but as long as a candidate knows upfront about the process and motivation it should be a lot less uneasy than days of wondering about how an interview went without an opportunity to correct the record. Due to the imbalanced dynamics of this experience, time-boxing your team to 15 minutes of discussion is a must. In this time a team should be coordinating to figure out what open questions remain; your goal is to figure out if you have enough signal to make a decision, and if not, what needs to be asked to get enough signal. You can dive deeper into the solutions a candidate came up with, discuss proper leveling, interpersonal dynamics, and whatever else matters to your team. If the interviewers agree that the candidate did well and don't have any open questions then this last interview will be short. Provide the candidate with 15-30 minutes to ask about whatever they may want to know, and then let them go destress. If you're still looking for more signal then the process will be extended a bit longer, accounting for the questions you want to answer. The candidate will still have 15-30 minutes to learn more about your company, but you'll also have 30-45 minutes to learn more about the candidate. You may want to clarify a response the candidate gave, which is easy to do with them still here. If you didn't get a good read on their product design skills, have an exercise prepared where they'll need to go in depth a little more. If it was the high level architecture discussion then ask them to design a slightly more complicated system. If you weren't sure about their feature development, work to expand upon what you built in the last session. Like every interview process, both sides are trying to leave with as few open questions as possible, so use this time to close out any open doors.

Following Up

While a high signal interview process is unsurprisingly about getting a lot of signal in a short period of time, it's also one that aims to have candidates and companies be respectful to each other in the process. A good interview process is only made better when a candidate receives an answer in a timely manner. Once the interview has wrapped up it's ok to take some time to figure out your thoughts, but don't take too long. If the process went as expected the interviewers should have a good idea about whether they want to hire a candidate after the third interview, so all that's left is to figure out the specifics.
It's not unreasonable to come out of an interview to pressing work, but it's important to close this loop; try to make an official call by the end of the day (or early the next day if the interview took place in the afternoon) so the candidate will remember this process favorably no matter the result. And that's it. This process is quick, has depth to it, and provides a much more accurate feel for what working with a person is like than a whiteboarding exercise. It's a flexible enough approach that you can tailor the interviews to your company's needs, and I would highly recommend doing so, reevaluating what's working and what isn't. We should always be reevaluating our interview processes because talent evaluation isn't easy and needs change as a company and the industry move forward, so if you have ideas for improvement I'd love for you to share them with me. This post was surprisingly difficult to write in a clear and concise manner without making a formal write up feel so formal, and not losing the message by being too informal. It really took a village, so thank you to the plethora of people who helped by providing feedback and editing my drafts, most notably Tamar Nachmany, Amro Mousa, and Brian Michel. There are a lot of tradeoffs when it comes to interviewing around preference, fairness, complexity, bias, and more. To be upfront I won't cover any of that in this post, but know that they were considered, and some are listed in the appendix below. I have another post planned about how every interview process is flawed and biased, but for now the most important thing to keep in mind is that every interview process, whether it's a take home project or writing code on a whiteboard, provides both sides signal, but comes with tradeoffs. Tradeoffs are a necessity because there's no fair way to evaluate in only a few hours what working with a candidate or a company will look like for the next few years, so it's important to acknowledge as a company which tradeoffs you're willing to make.

Caveats, Disclaimers, and Footnotes

This is written from the perspective of a company, but all of these considerations are just as important to a candidate understanding a company they hope to work at for a long time. Hiring criteria differs; for example, sometimes you need to hire the best applicant available vs. hiring someone who can do the role, so you should be flexible and define that ahead of time. This may not work for your startup, this may not work for a big tech company! But I believe it's a process worth considering if your company is one where engineering, product, design, and other organizational functions are expected to work well together. Assume that this is an on-site interview, something that may be less common right now, especially with the rise of remote work. The process should still be doable in a remote setting, but there may be some adaptations necessary.
The Best Twitter Bio? The Humble Tweet
Tell me who you are in 160 characters. I'll wait while you try and achieve the level of nuance necessary for the task. This constraint is why you end up with generic Twitter bios that don't tell you much about someone and all look like: Father, cyclist, biz-dev, and fighting every day for the Quebec sovereignty movement. Working on saving democracy @Meta, ex-Palantir, ex-Accenture, ex of my ex. Kinda hard to stand out, right? The inability to differentiate yourself on a platform built upon self-expression has always felt surprising to me, so I started to look for alternative means of letting people get to know more about me. The most common approach to gain additional room for expression is to use Twitter's Website field, linking out to a more information-rich bio. But that jump to the web is an opportunity to lose focus, especially in a world where nobody has the attention span to read (or leave Twitter). There are even solutions like Linktree that build upon the link to link to a link of links, letting those links speak for you. There are plenty of hacks one can imagine using, and more advanced users have taken to pinning an "intro thread", but what I was looking for was an option that would let me be more expressive than a bio, felt eloquent, and was native to Twitter. That's when I remembered the humble Twitter carousel. You may have seen carousels serving you ads for apps or services that everyone is excited about like… VMWare Cloud. But did you know that Twitter's advertiser tools are available to everyone, without having to pay a penny?1 That's right, you, a regular Twitter user, can have the same capabilities as a company that pays Twitter millions of dollars per year. I'm going to show you how to take advantage of them so you can stand out just like household brands such as Disney, Nike, and of course… VMWare. Who's That Handsome Guy? About Me (a pinned carousel tweet by Joe Fabisevich, @mergesort, February 19, 2022) I've always debated whether my bio should link to my home page, my About Me page, or the best posts I've written. Thanks to the carousel I no longer have to choose, and I'm able to put all three front and center. The flexibility of having six links means I can also guide people to the office hours I host, and have room to spare for another link or two.2 Just as important as taking the time to construct the tweet is remembering to pin it; that way anyone who comes to my Twitter account has this information visible immediately. Do you love red pandas? Love apps? Find out more about what we do! (a pinned carousel tweet by The Red Panda Club, @redpandaclub, February 19, 2022) A brand can get really creative by linking to their homepage, their support page3, FAQs, App Store apps, and a whole lot more. The carousel is also a medium to play with; you can use custom artwork to show off your brand's style, which in my case is playful and silly. I created a tweet carousel for my indie development company Red Panda Club as a proof of concept, and pinned it to the top of the @redpandaclub Twitter account.

Building Your Carousel

The process for building your own Twitter carousel is hidden away, but surprisingly easy. Head over to ads.twitter.com and look in the navigation bar for a menu titled Creatives. Under Creatives you'll find a menu item titled Tweet Composer; this is, unsurprisingly, where you'll want to compose tweets. The URL will look something like ads.twitter.com/composer/XXXXX/carousel, where the XXXXX will be a random set of letters and numbers.
You may run into various popups asking for more information, feel free to ignore 'em! The only popup you'll see that you have to fill out is one asking you to set up a payment method and campaign to unlock a Creatives menu. 1 (You really don't have to pay Twitter anything to use the tools but you do need to have a credit card on file to use the Ads Tweet Composer.) And that's actually it, there's no real step 2 beyond composing your tweet. But to save you some time I'll mention a few things that may seem unintuitive. A) Your first step for customizing the carousel is to add media. You can't start creating a card by updating the website destination; that will only appear after you've chosen media for your carousel. B) Website cards can customize the text that displays under a card's image, but app cards always display the app name and app category under the image. C) You'll be asked to choose an app carousel or a web carousel, but don't let that fool you into thinking you can't make one that mixes the two together. To do this you'll need to choose a website carousel, and for any app you want to add you'll use your app's App Store or Google Play URL and manually upload your app's icon. D) The Tweet Composer will default to making your tweet Promoted Only. If you want the composed tweet to show up on your profile timeline remember to uncheck the Promoted Only checkbox. It's worth noting that a Promoted Tweet is a real tweet, but you'll only be able to find the tweet's URL in the Twitter Ads dashboard. E) Cards do not render in third-party clients4, so if that's important to you I recommend including text and links in the body of the tweet. It doesn't look as good in native Twitter clients so it's your choice whether to prioritize this or not. Now that you know how to build a fun carousel, I'd love to see you get creative. I can't wait to see what kind you come up with. To add a payment method: Click the Create an ad button. You may be prompted to confirm your timezone, then click Let's go. A popup will display asking you to Choose your objective. Exit this screen by clicking the cancel button in the bottom right corner of the screen. In the top right corner you will see your account name, clicking it will show a dropdown option that says Add new payment method. Add a credit or debit card and from now on you should see the Creatives menu that you need to access the Ads Tweet Composer. I expect these links to change so I'm not particularly attached to them, and if I ever do change them I can always post an updated tweet. Alternatively, using a link shortener is a good strategy for not having to change the underlying data. Twitter doesn't let website carousels link to custom URL schemes so I got a little creative with my support link. Using my redpanda.best URL shortener I linked to a page whose contents contain <body onload="javascript: window.location.href= 'mailto:abc@mywebsite.com'; ">, to work around the fact you can't use a mailto: URL. Unfortunately the underlying cards these carousels are built on aren't exposed in the v2 third-party API tweet payload, so they can't be rendered in them.
Creating Slick Color Palette APIs
The work of writing maintainable code is an ongoing endeavor and some of my favorite problems to solve are ones that build maintainable systems. Maintainable systems are ones you can learn once, easily manipulate, and ideally take from project to project. My favorite part of building maintainable systems is that it minimizes the amount of work I need to do when starting a new project, and like it is for many programmers, hitting ⌘ + ⇧ + N to start a new project is one of the most satisfying feelings in the world for me. A color palette is something every well-designed app needs, and it turns out there are a lot of ways to solve this problem. If you don't yet have a good sense for how to construct a color palette, I highly recommend this post by Refactoring UI that explains the fundamentals of a good color palette. Generating a good color palette can be tricky if you're new to the practice and can require some trial and error, so if you'd like a shortcut a tool like Coolors is a great starting point. I've spent years iterating on approaches to codifying color palettes in my iOS and macOS apps, seeking to create one that's flexible, scales well, and is easy to understand, landing on the version we'll explore below. We'll be able to leverage asset catalogs, create a clear sense of hierarchy, provide statically typed semantic colors, and even take advantage of the built-in features of SwiftUI. As a bonus, if you're working with designers, your palette will be so straightforward to modify that a designer can submit a pull request to change colors without ever involving you. Considering we'll be leveraging asset catalogs, the first step should be pretty intuitive: we should create an asset catalog to hold our color palettes. As I was piecing together different instructions and ideas this second step confused me, so I'll spare you the misery. You're going to want to make a different folder for each color palette you create, and you can do that by right clicking in the asset catalog and selecting New Folder. Now it's time to create our color palettes. I'm showing you the Night palette I created for my app, and below it are Spring, Summer, and Winter. Each palette has a different name, but all of the colors are named the same; every palette will have colors named background-alt, primary, quaternary, etc. Do not miss this incredibly important step: guide your eyes towards the pink arrow on the right side of the image. You must select the folder you're adding colors to and check the Provides Namespace checkbox. This is what will enable our code to have a clear and consistent naming structure, matching the folder's name to our palette's name. Now that we've got our asset catalogs set up, we're ready to write some code. We'll start by constructing a new Palette struct, and populating it with some Colors that we'll reference across our app.
extension Color { struct Palette { let name: String var mainBackground: Color { Color(fromPalette: self.name, semanticName: "background-main") } var midBackground: Color { Color(fromPalette: self.name, semanticName: "background-mid") } var alternativeBackground: Color { Color(fromPalette: self.name, semanticName: "background-alt") } var primaryText: Color { Color(fromPalette: self.name, semanticName: "text-primary") } var alternativeText: Color { Color(fromPalette: self.name, semanticName: "text-alt") } var primary: Color { Color(fromPalette: self.name, semanticName: "primary") } var secondary: Color { Color(fromPalette: self.name, semanticName: "secondary") } var tertiary: Color { Color(fromPalette: self.name, semanticName: "tertiary") } var quaternary: Color { Color(fromPalette: self.name, semanticName: "quaternary") } } } This is a pretty standard palette and should cover most of the use cases you'll encounter in building an app. You can modify it to your needs, but the important thing is that the colors you choose need to match the names of the colors you declared in your asset catalog. But what is this Color(fromPalette:semanticName:) initializer? private extension Color { init(fromPalette palette: String, semanticName: String) { #if os(macOS) self.init(NSColor(named: "\(palette)/\(semanticName)")!) #else self.init(UIColor(named: "\(palette)/\(semanticName)")!) #endif } } You can just as easily use a private method in Color.Palette, something like private func assetCatalogColor(semanticName: String). I happen to prefer the ergonomics of a custom initializer, and this whole post is about improving ergonomics, so let's run with that. This initializer lives in Color.Palette and takes advantage of our namespaced folder structure to pull out colors from the asset catalog. palette unsurprisingly is the name of our color palette, and semanticName is the name of the color we're pulling out of it, such as primary, secondary, or background-main. Combining the two with a /, we'll get the primary, secondary, or background-main color from our current palette. All that's left is to define the palettes we'll be constructing. extension Color.Palette { static let spring = Color.Palette(name: "Spring") static let summer = Color.Palette(name: "Summer") static let autumn = Color.Palette(name: "Autumn") static let winter = Color.Palette(name: "Winter") static let day = Color.Palette(name: "Day") static let dusk = Color.Palette(name: "Dusk") static let night = Color.Palette(name: "Night") } Constructing palettes is pretty easy. All you have to do is instantiate a Color.Palette object with a name. That name must match the folder name, so in my app the palette name for Night will be Night because the folder we chose for that palete was "Night". Now that we have a color palette created, we should probably start using it. If you're using UIKit you'll likely have built your palette atop UIColor instead of Color and used .palette.primaryText wherever a UIColor is expected. But if we're building our app in SwiftUI, we can go the extra mile with just a few lines of code to leverage the Environment, making our palette easily accessible and SwiftUI-friendly. 
private struct ColorPaletteKey: EnvironmentKey { // We need a default palette, so without any particular preference let's pick `day` static let defaultValue = Color.Palette.day } extension EnvironmentValues { var preferredColorPalette: Color.Palette { get { return self[ColorPaletteKey.self] } set { self[ColorPaletteKey.self] = newValue } } } This creates the preferredColorPalette property in our Environment. If you're not sure how the Environment works there are a lot of great resources on the internet like this post by Keith Harrison. struct HeaderView: View { @Environment(\.preferredColorPalette) private var palette var body: some View { Text("Hello World") .foregroundColor(palette.primaryText) } } Now accessing our colors is nothing more than two lines of code; how slick is that? Hope I didn't wear you out, there's a lot here, but when you throw it into Xcode you'll see that it's pretty easy to digest. After a few iterations I'm really happy with this solution and have been using it in my personal apps for a while now. It allows us to easily add or change colors and have a visual representation of them, all with minimal code and a pretty slick API. But no code is perfect (except for mergesort), so I'm always looking for improvements. I'm excited to hear your thoughts or suggestions, so don't be shy about reaching out.
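One last note on wiring this up: to actually pick a palette, you can set the environment value once near the root of your view hierarchy and every child view will read it. A minimal sketch under the definitions above; the app name is a placeholder and persisting the user's choice is left out.

import SwiftUI

@main
struct PaletteDemoApp: App {   // "PaletteDemoApp" is a placeholder name
    // A plain @State is enough for the sketch; persist the choice however you prefer.
    @State private var palette = Color.Palette.night

    var body: some Scene {
        WindowGroup {
            HeaderView()
                // Every view below this point can read the palette
                // through @Environment(\.preferredColorPalette).
                .environment(\.preferredColorPalette, palette)
        }
    }
}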
It's Not Better If It's Also Worse
For a long time I've told people that I love technology and all it enables, yet dislike the technology industry and working in tech. People often find my statement hard to reconcile, probably because they see the two as inextricably linked. Technology is an ever-changing process, one that pushes humanity forward through the application of science, and the industry has become (and arguably always has been) about capitalizing those mechanisms of change. The tech world today is capitalizing those innovations at an unprecedented rate. That's why the tech sector is the most dominant and rapidly advancing industry in history. That's also what makes it so difficult to express the discord between the progress people see and the disproportionate impact it has. People see the progress of technology in real time, and they only see the moral and ethical issues with the rapid advancement of technology after that technology has established a role in people's lives. No one had any problems with Facebook in 2012, but they sure do now.

Why Is This Happening?

Many solutions invented today are about replacing existing solutions with more efficient ones. That feels like it should be an undeniable good. What happens when you create a technology that gives you everything you had before, cheaper, faster, and with even more improvements and features, but only addresses 99% of the problems? That's what you're seeing play out in the tech sector today. Companies are built on efficiency gains, but they don't have to pay the cost of the inefficiencies they create. They're not punished for the externalities they cause. As long as you're not feeling the burden, if you're not in the 1% for whom the new technology is a worse experience, you see nothing but progress and get to reap its benefits. But those who bear the cost are shut out from the new experience or improvement in their lives. Oftentimes they're not only shut out, but as the world modernizes and standardizes around these solutions, those people are falling behind through the inequities of being a have-not in a system where almost everyone else is a have.

Technology For Good, Not Always So

Machine learning is an amazing technology; it shows us what we always could have known if we were smart enough to spot patterns in enormous data sets. It's helping fight cancer, it's helping people who can't speak have a voice, it's helping power the recommendation engines that give us the answers we need to get through the day. But what happens when machine learning makes a mistake? What happens when the system you've built increases racial bias? The most common use of machine learning seems to be reshaping our interactions around algorithms that want to optimize us as consumers. There's no manager to talk to, there's no one who can look into this black box, there are few companies who will turn off their models that work for 99% of people. At the scale of millions, incorporated into our lives, these technologies that make the world a better place are making some people's lives worse. Cryptocurrency is having a moment, especially as billionaires bored during a pandemic have started shoveling money into NFTs. Is there a need for a global censorship-resistant currency? I'm not smart enough to say, but I can see the allure and benefits that drive crypto enthusiasts to push for it.1 What I am smart enough to say is that the environmental impact of cryptocurrencies is a nightmare.
Supporters will say "just wait, Proof of Stake will solve these problems", and they're probably right. But PoS was introduced in 2012 and it's still not here in 2021. We're living in a Proof of Work world, so why is it the people bearing the costs who have to wait? How about when you've fixed your problems you can come back and try again. How much unnecessary damage to the planet has been done in the last 9 years in the name of advancing a technology? I'm not here to pick on machine learning and cryptocurrencies; these problems of unaccountability are systemic. Uber is an amazing innovation in global transport built on already existing infrastructure, except to the drivers. Amazon provides unprecedented logistics, letting you have anything you could ever want in two days, built on the backs of real human beings. Twitter is a real-time archive of human history, except it globalizes and nationalizes local issues, and disproportionately exposes marginalized communities to abuse and harassment. What technologists optimize for is an explicit choice that's made, even if the tools we use to do so render opaque results. So much of engineering is focused on the concept of minimizing friction, but minimizing friction is almost always focused on short-term benefits because that's what feels most rewarding. This is echoed by people who decry the death of investing in infrastructure, and as a society we seem to be trading our long term costs for short term happiness. We're letting technology go down the same path, even using it to accelerate that trend. As any technologist knows, technical debt eventually gets paid off, either by choice or by circumstance.

What's To Come

The list goes on, and will continue to go on until there's a cost associated with making mistakes. Existing incentive structures in our society and economy don't factor in a price for the externalities of building something that causes damage, even if that damage is only borne by a small percentage of people. At the scale technology operates at, edge cases are millions of people. The sheer awesomeness of technology can lure you into a sense of moving forward. It may feel that the ends justify the means, that you're doing the right thing at a small cost. You too are creating or experiencing the advancement of humanity through technology, like billions have felt before us. But as they say, history is written by the victors; the people who are worse off through our advancements are, as always, being forgotten and erased. The drive to continue growing leaves little time for fixing mistakes when there's another frontier to capitalize. That only makes it harder for people who are left behind to be brought forward. We need to do better to understand not only the benefits, but to explore the costs of a new technology. The costs are real, and are felt by real people. I wish that by the end of my post we'd found a solution, but unfortunately systemic issues aren't fixed by meandering thoughts. The most I can ask you to do is to think about the externalities of your actions, to not accept new technology into your life without considering the tradeoffs, and to hold those building them accountable, the same way you would if it was your life being impacted negatively. Not that anyone asked for my opinions on censorship-resistant currency, but I do think the goal in a vacuum is laudable. Despite that I think people underestimate the value of tried and true banking systems that have been operating for over 600 years.
I suspect much of the antipathy towards banks is actually people's understandable disgust at our current financial system. I don't see how cryptocurrencies fix that, and in many ways, by not having the promises of banks, they only make it worse.
Putting the U in GraphQL
GraphQL has been on my list of technologies to learn for a few months now, and last week I came across Majid Jabrayilov's post, feeling pretty excited to tackle the subject. The post was very good, but it didn't answer the one question I've had as I've gone through numerous exercises to understand GraphQL: how do I make GraphQL requests without a library? I've read about how to create a GraphQL query and how to integrate GraphQL on your server a dozen times, but one topic that's highly under-covered is how to make a GraphQL request from the client. In the world of GraphQL it's very common to reach for Apollo, a library that handles turning GraphQL queries into functions, leveraging tooling to turn those functions into type-safe API requests the client can make. While this is a perfectly reasonable approach, and actually a pretty good developer experience, it still didn't answer the question I had as an ever-curious engineer: how would I do this on my own? I broke the problem I saw down into two smaller problems, request-generation and request-making. Generating a request, especially in Swift, it turns out is pretty easy. I really like the approach that SociableWeaver takes, leveraging Swift's function builders to let you build a type-safe query directly in Swift. The second problem was a bit fiddlier. I knew that I had to make a POST request, and I knew the endpoint that was being hit, and through some trial and error (and a friend's help1), I was able to start making GraphQL requests without any external libraries needed. extension URLSession { func graphQLRequest(url: URL, query: String) -> URLSession.DataTaskPublisher { var request = URLRequest(url: url) request.httpMethod = "POST" request.addValue("application/json", forHTTPHeaderField: "Content-Type") let body = """ { "query": "\(query)" } """ let queryData = body.data(using: .utf8) request.httpBody = queryData return self.dataTaskPublisher(for: request) } // If using SociableWeaver or a similar GraphQL query generator, you can do it in a type-safe manner. func graphQLRequest(url: URL, query: Weave) -> URLSession.DataTaskPublisher { return self.graphQLRequest(url: url, query: query.description) } } After looking over the above code a few times I realized that the majority of it was handling the creation of a URLRequest. That served as a hint to me that we could refactor the code into a custom URLRequest initializer. This would be less prescriptive about how the URLRequest is used, since my first code snippet assumes you always want to return a URLSession.DataTaskPublisher. extension URLRequest { init(url: URL, graphQLQuery query: String) { self.init(url: url) self.httpMethod = "POST" self.addValue("application/json", forHTTPHeaderField: "Content-Type") let body = """ { "query": "\(query)" } """ let queryData = body.data(using: .utf8) self.httpBody = queryData } // If we're going all in on SociableWeaver we can make a similar initializer that takes a `Weave` parameter instead of a `String`. } Now if you'd like to use URLSession.DataTaskPublisher you're free to do so by creating a URLRequest from our new initializer and using it, but you can also return a URLSession.DataTask or any other request mechanism that involves a URLRequest.
extension URLSession { func graphQLRequest(url: URL, query: String) -> URLSession.DataTaskPublisher { let request = URLRequest(url: url, graphQLQuery: query) return self.dataTaskPublisher(for: request) } func graphQLRequest(url: URL, query: Weave) -> URLSession.DataTaskPublisher { return self.graphQLRequest(url: url, query: query.description) } } That looks a lot cleaner, and our responsibilities seem a lot more well-divided. Is there room for tools like Apollo? Absolutely! I'm not going to pretend that my dozen lines of code replaces the value that a multimillion dollar company provides. (I'll only make sick jokes about it.) But before importing a library like Apollo, any library really, it's worth asking yourself whether you need a big solution for a small problem. Or maybe the better question to ask before that is: have you really understood the problem you're trying to solve? But we still haven't really answered where exactly we should put the U in GraphQL. (I say after the Q since Q is almost always followed by U, but I'm open to feedback on that or the rest of this post.) Special thanks to Dave DeLong for his debugging prowess.
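To make the call-site concrete, here's a rough usage sketch of the extension above with Combine. The endpoint and query are made up, and note that because the query is interpolated into a JSON string, any double quotes inside it would need to be escaped first.

import Combine
import Foundation

let url = URL(string: "https://example.com/graphql")!   // hypothetical endpoint
let query = "{ posts { id title } }"                     // hypothetical query, no inner quotes to escape

var cancellables = Set<AnyCancellable>()

URLSession.shared.graphQLRequest(url: url, query: query)
    .map(\.data)
    .sink(receiveCompletion: { completion in
        // Handle failure or completion here.
        print(completion)
    }, receiveValue: { data in
        // Decode `data` into your own Decodable response types.
        print(String(decoding: data, as: UTF8.self))
    })
    .store(in: &cancellables)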
App Store [P]review
Apple's been in the news quite a bit lately over concerns that many apps on the App Store are little more than scams. Some of these apps aren't even functional, they don't provide anything more than a screen with no functionality, only a button to purchase an indefinite weekly subscription. Many developers and consumers are confused or surprised that Apple isn't catching these scams, given Apple has a process for App Review which every app must go through, and while I'm not surprised given the breadth of the problem, I find myself thinking it's very problematic for the digital economy and consumer confidence in buying services through what once was considered a safe place. Twitter, the company I work for, deals a lot with content moderation. I'd argue it's the largest existential threat to the company. Whether Apple likes it or not they've walked into the same position with the App Store. This may be news to them, having spent decades curating all sorts of content from music to workouts, but as the App Store has grown, they now serve billions of customers and work with millions of developers. Those developers are creating content that Apple has little control over, other than acting as a gating mechanism, and so their ability to exercise control over that content has diminished significantly. Letâs skip past the debate about whether or not Apple should have this much level of control or whether the system needs to be reformed. Instead I'd like to talk about where I think Apple can patch the cracks in a broken App Store, before it breaks itself apart or is broken apart from the outside. The App Store Apple treats every developer the same (đ), or at least let's say so for the sake of this argument.1 From my own work, what I've seen is that when you don't have any way of validating whether someone is a good actor or bad actor, the reasonable default assumption is that everyone is a bad actor, and that's how Apple treats developers. This leads to many false negatives and false positives, good developers getting much more scrutiny than they should, and bad developers sliding through when they are in need of scrutiny. There's no thorough process for validation, there's no process for restorative justice, only Apple doing their best to remain hyper-vigilant at all times, and accepting the human errors that come along with that. While you can't eliminate bad actors, they're always going to exist in any system, what you can do is minimize the total bad actor surface area, and minimize the effects of these bad actors. A simple equation: if you treat 100% of developers as potential threats, there's no way to avoid hyper-vigilance, but if you only have to watch out for 20%, then you can be five times more efficient at rooting out bad behavior such as scams. So we need a way to let bad actors tell us that they're bad? No, we need a way for good actors to signal to us that they're good, leaving everyone else in questionable territory. Apple needs an internal scoring system to know where to devote their investigative resources. If you don't need to pay attention to the well-meaning app developer trying to make an honest buck, you can devote more resources, or your limited resources to the people who haven't done the work to show that they're good actors. Incentive Design Letâs take a step back to understand our friend from the world of behavioral economics, incentive design. To quote liberally: An incentive is something that motivates an individual to perform an action. 
It is therefore essential to the study of any economic activity. Incentives, whether they are intrinsic or extrinsic (traditional), can be effective in encouraging behavior change, such as ceasing to smoke, doing more exercise, complying with laws or increasing public good contributions. Traditional incentives can effectively encourage behavior change, as they can help to both create desirable and break undesirable habits. I don't have all the answers for fixing the App Store, but I don't think you need all the answers up front to start improving the system. Taking what we learned about incentive design above, what I see Apple having is a resource allocation problem due to them not knowing who's complying with the rules and contributing to the public good. With that in mind, a scoring system is where I would invest resources to know who's having a net-positive and a net-negative effect on the App Store system. There can be many contributors to this scoring system. Right now reviews and downloads are already used, but they are gameable due to their public nature, an example of poor incentive design. Surely new metrics can be added, such as how many times they've passed the scrutiny of app review, or how closely the instructions a developer gives match the reviewer's expectations. While those are relatively weak signals, Apple surely has dozens if not hundreds more internal signals they can apply to understanding a developer's net outcome on the App Store. And yet I think there's room for a much stronger signal.

App Store Preview

App Store Preview would work similarly to Apple's current DTS system, where you can get hands-on help with a technical problem you're having. A developer should be able to get pre-approval for an idea2, in the context of their application, without having to build an entire feature (or application) before App Review deems it worthy. This would also provide context for future reviewers, knowing what to look for and what's changed. The more a pre-approved version matches the reviewer's expectations come review time, the higher the score the developer would receive. The higher their overall score over time (by some to-be-established scoring mechanism), the less scrutiny they would receive in the future. More important, though, is the inverse. If someone doesn't go through review, they implicitly receive more scrutiny. Bad actors will be disincentivized to have their app in Apple's hands for longer, and to be put under a microscope. This aversion makes them inherently less trustworthy, and would lead to them getting more scrutiny in the future. By letting good actors prove they're good actors, we've isolated the bad actors to show their cards and prove through implicit means that they're not good actors. This wouldn't fix the mistakes that App Review makes, and bless them, it's a very tough job. This doesn't even solve many of the App Store's problems; it's only one idea and there are many other problems that Apple needs to solve. But it does show that Apple has lots of levers to pull when designing resilient systems, and can lay the foundation for a system where Apple can trust developers. And that's increasingly necessary for maintaining the consistent quality Apple, developers, and customers all want for the App Store. As always, I'm excited to hear your thoughts, and am receptive to feedback, so if you want to talk don't be shy about sending me a trustworthy tweet.
To paraphrase George Orwell, "all developers are equal but some are more equal than others." Given this is time-intensive, you could make validation finite, maybe 2-4 tickets per year, or even have a system where people can pay for more tickets to have more previews, something that would surely be valuable for many developers and Apple.
Empower Apps Podcast - Large Scale Teams
I recorded an episode of the Empower Apps podcast, where Leo Dion and I discussed a wide range of topics. We spoke about everything from how we scale app development to thousands of people and millions of users at Twitter, to communication, documentation, and people working together, and a lot about the complexity of holding moral frameworks at a global level.
Empower Apps: Large Scale Teams with Joe Fabisevich
Original recording
Context-Bound Types
I've been thinking about privacy lately. No, not online privacy, but about how APIs can balance exposing the right amount of implementation details without revealing too much. I'll walk through a task I find myself doing often when building iOS apps, creating a view controller with header view, and four different ways to go about it. Regular View Configured as a Header SettingsViewController.swift final class SettingsViewController: UIViewController { private let headerView = UIView() private let tableView = UITableView() override func viewDidLoad() { super.viewDidLoad() self.view.addSubview(self.tableView) self.setupTableView() self.configureHeaderView() } func setupTableView() { self.tableView.translatesAutoresizingMaskIntoConstraints = false NSLayoutConstraint.activate([ self.tableView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor), self.tableView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor), self.tableView.topAnchor.constraint(equalTo: self.view.topAnchor), self.tableView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor), ]) } func configureHeaderView() { // Some code configuring self.headerView // ... // ... self.tableView.tableHeaderView = self.headerView } } For folks new to iOS development, this is a common approach I see when adding a header. It makes sense, you want to have a header, and a header is a view, so why not configure and style UIView to be the UITableView header. While this is a good first try, it lacks the encapsulation that makes your code easy to edit and reason about. Separate Class For The Header SettingsViewController.swift final class SettingsViewController: UIViewController { private let headerView = SettingsTableHeaderView() private let tableView = UITableView() override func viewDidLoad() { super.viewDidLoad() self.view.addSubview(self.tableView) self.setupTableView() self.tableView.tableHeaderView = self.headerView } } SettingsTableHeaderView.swift final class SettingsTableHeaderView: UIView { // Some code creating and configuring SettingsTableHeaderView // ... // ... } A naive approach to improve our readability would have been to move our configuration code into a function, but an even nicer improvement is to move it into its own class. This looks a lot better, it's easier to reason about and it's well-encapsulated. But a new problem this introduces is adding SettingsTableHeaderView into our moduleâs namespace. Now I'll admit this isn't the world's biggest problem, but as you start adding different view controllers with different headers, suddenly finding the right header view for a given view controller becomes difficult. Private Class for the Header SettingsViewController.swift final class SettingsViewController: UIViewController { private let headerView = HeaderView() private let tableView = UITableView() override func viewDidLoad() { super.viewDidLoad() self.view.addSubview(self.tableView) self.setupTableView() self.tableView.tableHeaderView = self.headerView } private final class HeaderView: UIView { // Some code creating and configuring SettingsViewController.HeaderView // ... // ... } } Now this is a solution that I'm really liking. We've moved SettingsTableHeaderView out of our moduleâs namespace and into one dependent on the context it's in, SettingsViewController. When referring to SettingsViewController.HeaderView inside of this class we can plainly refer to it as HeaderView, which is not only less verbose, but emphasizes the pairing between HeaderView and SettingsViewController. 
There is a downside to this approach though: the more views we add to SettingsViewController, the harder this file becomes to parse. Now again this may not seem like a big problem, but if you have a well encapsulated view, you may have many subviews that belong to either SettingsViewController or HeaderView, and your file can get pretty large. (Trust me, I've seen, and written, some pretty large files.)

Two Files with Namespaced Internal Classes

SettingsViewController.swift final class SettingsViewController: UIViewController { private let headerView = HeaderView() private let tableView = UITableView() override func viewDidLoad() { super.viewDidLoad() self.view.addSubview(self.tableView) self.setupTableView() self.tableView.tableHeaderView = self.headerView } } SettingsViewController.HeaderView.swift extension SettingsViewController { final class HeaderView: UIView { // Some code creating and configuring SettingsViewController.HeaderView // ... // ... } } This is the approach I've settled on today. You'll notice that HeaderView is no longer private, but it's also not particularly easy to access publicly. You still end up with the benefits from namespacing the API, and this extension can go into its own file, unlike the earlier approach. If you were to accidentally misuse this API, it would be pretty clear. When calling HeaderView inside of SettingsViewController the call-site is clean and simple. But if someone were to attempt to use it from another class, they would have to reference the fully-qualified type, SettingsViewController.HeaderView. While I've walked through one example with four approaches, binding a type to its context is something you can do throughout a codebase. In an ideal world Swift would have a submodule keyword to make types less ambiguous, but in the meantime this is a reasonable substitute that developers can take advantage of. While we don't have a submodule keyword, we have a close approximation by using empty enums (there's a small sketch of that at the end of this post). One notable example is Combine's usage of Publishers and Subscribers to help people have context and understanding for their subtypes. As always, I'd love to know what you think or if you've come up with better solutions, so please don't be shy about reaching out. Special shoutout to Jasdev for taking a very rough first draft and helping me turn it into something coherent.
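For completeness, here's the empty enum version I hinted at above; a minimal sketch with made-up names. Because the enum has no cases it can never be instantiated, so it acts purely as a namespace.

import UIKit

// No cases, so `Settings` only exists to group related types.
enum Settings {
    final class HeaderView: UIView {
        // Configure the header's subviews here.
    }
}

// Call-site: code outside the namespace has to spell out the fully-qualified type,
// which keeps the pairing between the header and its context explicit.
let header = Settings.HeaderView()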
Building Better Views (Part II), Next Steps
If you haven't checked out Part I, I recommend reading it because if you don't, none of the writing below will make sense!

Three Unanswered Questions

1. What happens when the views you want to configure are more complex? My recommended approach is to construct a one-time use struct, specifically for displaying in that one place. This type should only have the properties you need to render the view. struct HomeScreenCourseProgressViewDisplay { let course: Course let enrollment: Enrollment let customization: SchoolCustomization } Creating the ViewData should look familiar. We're going to do the exact same thing we did before. extension HomeScreenCourseProgressViewDisplay: CourseProgressViewData { var titleLabelText: String { return self.course.name } var subtitleLabelText: String { return self.course.author.name } var statusLabelText: String { return String.localizedStringWithFormat(NSLocalizedString("%@% complete", comment: "The percentage a course is complete"), self.enrollment.percentComplete) } var progress: CGFloat { return CGFloat(self.enrollment.percentComplete) / 100 } var imageUrl: URL? { return self.course.imageUrl } } Using this ViewData object is just as simple as it was before. On our home screen, we now create the struct, and configure our custom view with it. Same as before, just leveraging how lightweight creating types in Swift is! func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { guard let currentUser = self.userAtIndexPath(indexPath: indexPath), self.hasCoursesAtIndexPath(indexPath: indexPath) else { fatalError("Ruh roh!") } // Dequeue a cell hosting the custom view (a hypothetical CourseProgressView), mirroring the helper from Part I. let cell = tableView.dequeueReusableCell(forIndexPath: indexPath) as GenericTableCell<CourseProgressView> let currentCourse = currentUser.courses[indexPath.row] let currentEnrollment = currentUser.enrollments[indexPath.row] let schoolCustomization = currentUser.school.customization let homeScreenDisplay = HomeScreenCourseProgressViewDisplay( course: currentCourse, enrollment: currentEnrollment, customization: schoolCustomization ) cell.customView.configure(viewData: homeScreenDisplay) return cell } 2. How does the ViewData pattern deal with user interaction? I advise keeping user actions in the UIView realm. You can continue using the delegate pattern, closures, or wherever your preferences may lie. If you're looking to get a little more advanced, I'd consider reading Dave DeLong's A Better MVC series. 3. Where does logic code reside, and what happens if you have more complex transformations? The scenarios so far have worked great. The models you received from the server looked a lot like the way you plan to display them, but that's not always the case. Sometimes you're going to need business logic, and that's ok. This is the question I had the most trouble coming up with one answer for. I realized the reason I couldn't come up with one answer is because there isn't only one answer. Looking back at our Comment model, we see that there is a Date object in there. public struct Comment { let text: String let commenter: String let createdAt: Date let imageUrl: URL? } In our first example we simply glossed over the fact that we were translating a Date into a String, by using a simple function that already exists in a third party library. extension Comment: CommentViewData { var timestamp: String { return self.createdAt.timeAgoSinceNow } } But now let's pretend we don't have timeAgoSinceNow available to us. Where does that transformation code live? The answer is, it's up to you! Some people prefer to make an object to handle business logic, to make their code more testable.
If it makes you happy to keep it in the ViewData file, go right ahead. If not, then don't. Who am I to tell people how to be happy? extension Comment: CommentViewData { var timestamp: String { let dateTransformer = DateTransformer(date: self.createdAt) return dateTransformer.asString() } } struct DateTransformer { let date: Date func asString() -> String { return someMagicalDateTransformer() } } My personal preference is to use private static functions, keeping in tune with the functional nature of this approach. extension Comment: CommentViewData { var timestamp: String { return Self.transformDateToString(date: self.createdAt) } } private extension Comment { static func transformDateToString(date: Date) -> String { return someMagicalDateTransformer() } } The important thing to note is that when it comes to business logic, you have the agency to structure your codebase however you'd like. The ViewData pattern isn't prohibitive or prescriptive; it's just there to aid you in transforming models into views. These are the big questions I've received while using this pattern over the last few years. I'm excited to hear your thoughts, and am always receptive to feedback!
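One nice side effect of keeping these transformations as functions of the model is that the ViewData conformance is straightforward to test. A small sketch, assuming the Comment model and conformance above live in a module you can import with @testable; the values are made up.

import XCTest
// @testable import YourAppModule   // whatever module holds Comment and its CommentViewData conformance

final class CommentViewDataTests: XCTestCase {
    func testTimestampIsPopulated() {
        // Uses the struct's memberwise initializer from the example above.
        let comment = Comment(
            text: "Nice post!",
            commenter: "Jane Appleseed",
            createdAt: Date(timeIntervalSinceNow: -3600),
            imageUrl: nil
        )

        // The exact string depends on your date transformation; here we just
        // assert that the ViewData property produces something displayable.
        XCTAssertFalse(comment.timestamp.isEmpty)
    }
}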
Building Better Views (Part I)
As iOS developers, a lot of our work involves taking models from a server, and transforming them to be displayed on an iPhone or iPad. This sounds like a job for some declarative architecture. If you ask 3 programmers how to define MVVM, expect to get 7 different responses. (tweet by Joe Fabisevich, @mergesort, April 14, 2016) Confession: I've never fully bought into MVVM. I don't think it's worse than MVC. I use View Models as a place to store state and actions for View Controllers, and preferably stateless functions for manipulating data. In my experience, things become harder to maintain when they start becoming a crutch, as a place to put your code if it doesn't neatly fall into the Model, View, or Controller label. With this in mind, I realized we need an answer for configuring our views in a way that's maintainable, and ultimately transforms one or multiple models into a view. This led me to the idea of ViewData. I started working on this with @shengjundong at Timehop, and have been using it successfully across apps of varying sizes since. There are three parts to this approach: A UIView instance. This is your standard view that you'll be displaying in an app. It can be a regular class, or a custom subclass as you need. A ViewData protocol. This is what's going to keep track of the data that needs to be displayed in your view. Most commonly this will be a slice of a model, used specifically for rendering the view. A configure(viewData: ViewData) function. This is what's going to map your View to your ViewData.

An Example

Let's start with an example, where we're building a view to display a comment. It will have a few properties you'd expect from a comment view: a commenter, their avatar, some text, and a timestamp. To make it easier to visualize, let's imagine it looks like this: We start with a simple model. This is what we'll later be manipulating for display purposes. public struct Comment { let text: String let commenter: String let createdAt: Date let avatarURL: URL? } A simple UIView subclass to display the comment. public final class CommentView: UIView { let titleLabel = UILabel() let subtitleLabel = UILabel() let statusLabel = UILabel() let replyButton = UIButton(type: .custom) let avatarImageView = UIImageView() } Now we get to the fun stuff. We'll make our first ViewData protocol. This represents how we will render the data we're trying to populate the UIView with. protocol CommentViewData { var title: String { get } var subtitle: String { get } var timestamp: String { get } var replyText: String { get } var replyImage: UIImage? { get } var avatarURL: URL? { get } } Let's conform our model to our CommentViewData protocol. This will be how we tell our CommentView how it should display our model whenever it comes across an instance of it. // The original data source is made to conform to the protocol which we are using for display, CommentViewData extension Comment: CommentViewData { var title: String { return self.commenter } var subtitle: String { return self.text } var replyText: String { return NSLocalizedString("Reply", comment: "Text for replying to a comment") } var replyImage: UIImage? { return UIImage(named: "reply") } var timestamp: String { return self.createdAt.timeAgoSinceNow } } One thing to note is that the avatarURL property automatically satisfies the CommentViewData requirement! As long as we plan to use it directly, we don't have to add it to our extension. Last but not least, we need to configure the CommentView with a CommentViewData.
```swift
extension CommentView {

    func configure(viewData: CommentViewData) {
        self.titleLabel.text = viewData.title
        self.subtitleLabel.text = viewData.subtitle
        self.statusLabel.text = viewData.timestamp
        self.replyButton.setTitle(viewData.replyText, for: .normal)
        self.replyButton.setImage(viewData.replyImage, for: .normal)
        self.avatarImageView.setImage(from: viewData.avatarURL)
    }

}
```

We've got everything configured in a nice declarative fashion, but how do we actually use this? This is in my opinion the best part. Let's look at the call-site.

```swift
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    // My own homegrown solution, you're under no obligation to use it of course 🙂
    let cell = tableView.dequeueReusableCell(forIndexPath: indexPath) as GenericTableCell<CommentView>

    // This is of type `Comment`
    let currentComment = self.comments[indexPath.row]

    // Comment conforms to `CommentViewData`, so we can use it directly!
    cell.customView.configure(viewData: currentComment)

    return cell
}
```

And that's it! All you need to do is pass the original model object to the view, and as long as it conforms to the right protocol, you've got it working without any intermediate objects.

This may seem like a lot of boilerplate, and to be honest, it's more than I would like. There are other languages with features such as row polymorphism or extensible records which would make this easier. Until Swift supports these language features, or macros, or more powerful tooling that can fill the gaps, this is the best solution I've found to enforcing good practices and leveraging compile-time safety for view configuration.

Now you may also be thinking "sometimes my models don't map to how they're displayed one to one, how can I make that work?" Follow along with part 2, where we'll cover that, and a few other questions you may have.

As always, I'm excited to hear your thoughts, and am receptive to feedback!
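If you're curious before part 2 arrives, here's a minimal sketch of one possible answer, under the assumption that a second model exists alongside Comment. The User type, its properties, and the CombinedCommentViewData name are hypothetical, purely for illustration; instead of conforming a model directly, a small standalone type can adopt CommentViewData and combine whatever models it needs.

```swift
import UIKit

// Hypothetical second model, not part of the original post.
struct User {
    let displayName: String
    let avatarURL: URL?
}

// A standalone type adopting CommentViewData, combining two models into one view's data.
struct CombinedCommentViewData: CommentViewData {
    let title: String
    let subtitle: String
    let timestamp: String
    let replyText: String
    let replyImage: UIImage?
    let avatarURL: URL?

    init(comment: Comment, commenter: User) {
        self.title = commenter.displayName
        self.subtitle = comment.text
        self.timestamp = comment.createdAt.timeAgoSinceNow // assumes the same Date extension used above
        self.replyText = NSLocalizedString("Reply", comment: "Text for replying to a comment")
        self.replyImage = UIImage(named: "reply")
        self.avatarURL = commenter.avatarURL
    }
}
```

The call-site stays the same, you just pass the combined type to configure(viewData:) instead of the model itself.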
How To Keep Friends And Influence People Or Something
Maybe the real friends were the friends we made along the way.

I quit using Facebook years ago, and only follow ~70 people on Twitter, which leads some to assume that I don't find keeping in touch with people to be a top priority, but that couldn't be further from the truth. My todo list is where I hold my priorities, and my friends and family are the biggest priorities in my life, so that's where I turn to for making sure I'm staying close to them.

At first blush you may think that a todo list sounds incredibly impersonal, but it's actually a deep expression of caring. When people ask for more details, I receive pretty much universally positive feedback, so now I'll share it with you.

A Todo List?

Yep, it's as simple as it sounds. I have a list called Keep In Touch. Each entry on my Keep In Touch list is a person, with their phone number attached to the entry, that way I can text 1 them from there. Each entry also has a time interval, how often I contact them, depending on how close we are.

Now I admit, this does sound a bit mechanical, but here's where the philosophy behind this list is important. If you call it a system it sounds cold, but creating a dedicated space for people in your life sounds a lot warmer. The word todo sounds like a chore, you're being told what to do, but these are just reminders, not orders!

The odds of me not talking to my best friend for a week are slim to none, but it does happen every once in a while. This todo list item serves as a nudge: hey, reach out to your best friend, even if it's just sending a pulse. This almost always leads to a deeper conversation where you learn about what's going on in your friend's life. It provides an opportunity to find out about them, and for them to find out about you. But sometimes it doesn't, and that's ok, in that case it just serves as a reminder that you're in their life and they matter to you.

Building A Schedule

Your best friend though is a bit of an edge case, odds are you're going to be talking to them quite a bit. This system works best for those people who you aren't in touch with every day, or even every week. I want to be clear, this isn't a formula. You should figure out what cadences work best for you in your life, this is what mine happens to look like.

Every week: My best friends and family.

Every 10-14 days: The next ring of friends, those who I spend a lot of my time with.

Every 3 weeks: People who I hang out with but may not be as close with.

Every month: People who I'll see every once in a while, but whose lives I want to keep up with.

Every 6 weeks: People who I see a few times a year, but enjoy being around and want to keep in my life with a light touch.

This continues on by adding two or four week intervals until I've reached keeping in contact every four months. If I can't find a reason to reach out to someone every four months, the unfortunate truth is that I probably don't have the energy to keep them close in my life.

My whole list is about 30 people, give or take a few as friendships and dynamics change. Time has shown that's about as many deeper relationships as I can handle, not including a romantic partner and the past and present work relationships I maintain.

You Have A System, Now What?

How many times in your life do you think to yourself "I should really get in touch with Samantha, it feels like we haven't spoken in forever", and then done nothing about it? Each time a reminder comes up, it's a forcing function. Let's say Samantha shows up on my list today again, here are a few places my mind goes.
We normally talk to each other every month, but as luck has it we ran into each other last week.

1. I actually spoke to Samantha last week
1a. Let me just cross her name off the list and move on with my day!
1b. I've really been talking to Samantha a lot, I know that I reach out to her every two months but maybe I should start reaching out to her once a month!

2. I haven't spoken to Samantha in a while
2a. Let me see what she's up to!
2b. I don't really have the urge to talk to her, maybe we're drifting apart?
2c. Our last three conversations have all been the same, I should re-evaluate how much space I have for Samantha, maybe checking in every two months is a better pace than every month?

Going through this thought process gives you an opportunity to rebalance your relationships as your life and friendships change. We assume that friends are forever, and to some extent that's true. You'll always carry them with you, but not equally at all times in your life. It's worth figuring out how you can be the best friend to your best friends, so you can be at your best.

My system for keeping close people close is one of the most important learnings I have to share. Friendships are the most direct connections people have, and this process really helps put them into perspective.

Now that you're done reading, go say hello to a friend, improve a relationship, or make some new ones. If you have any thoughts or feelings on the idea, please be a friend and reach out. I would absolutely love to hear what you think.

I write text, but only in the loosest of manners. With people across the country it's often a text, with my parents it's always a call, and with many people in my life it's grabbing lunch or dinner. You should pick whatever your preferred mode of communication is and go from there.
Pushing The Boundaries of Technology
Throughout history technology has aided humanity. Not the other way around. From the invention of fire, to the creation of the wheel, the printing press, and the personal computer, technology has acted as a multiplier for what humans can do. Technology doesnât exist in a vacuum, and it wonât stand as an industry by itself. We â as software developers â should always keep this in mind when creating technology. These days technology dominates our phones, financial markets, and arguably even political outcomes. It would be easy to say that the technology industry dominates our every day lives, and yet while accumulating negative headlines, has skirted accountability. So far waiting for technology leaders to make meaningful change has led to little change. Few people are choosing to build their businesses upon ethical practices, and those that do face an uphill battle against competitors that donât. It seems that if change will happen, it must happen bottom-up, with a plurality of voices speaking in unison. The software developers who are building this technology need to be having these discussions. It falls on us as much as anyone else to hold the industry to a higher standard. It may even fall on us more as we prosper from the system, and must help lift others up to ensure that technology isnât just a multiplier, but a positive multiplier. To start a conversation, below is a short list of ways we can improve our industry. Bring your moral compass to work I donât believe that people necessarily have to bring their complete selves to work, especially when you think about a company being a business. But I do think itâs incredibly important to speak from your moral center. The easiest way to do that is to work on things where your personal goals and moral beliefs are aligned with the work you are doing. The tech industry often feels like a vast land of amorality, because accepting morality means taking a stance and responsibility. It turns out taking on responsibility is difficult. But if youâre not working for what you believe, and you have the means to satisfy Maslowâs hierarchy, what are you even working for? Certification Thereâs something Iâve been thinking a lot about, bifurcating the software development industry with certification. As software continues to eat the world, this laissez-faire attitude around certainty canât continue. Software is the heart and soul of the tech industry, but software will become the lifeblood of many more industries over the coming years. There is no putting the genie back in the bottle, software has exponential value and a marginal cost to produce. Software has already played an important role in furthering humanity, and itâs not going to disappear. With that in mind, we must create safeguards to ensure a level of quality that we currently leave to faith. Critical infrastructure has to be built by certified developers who face penalties for code that causes damage. Writing uncertified code shouldnât be banned, anyone should still be allowed to write whatever code they want. Code is a form of expression and I have no intent on stifling it. But if the code you writes gains traction, or becomes critical infrastructure itself, you have to get it certified and have it built by certified developers. Trading progress for certainty makes sense if we want to have the industry considered trustworthy. Oversight and penalties Along with certification, there needs to be legal infrastructure to respond to software that causes harm. 
If software causes harm to others, either through malicious intent or through malpractice, someone must be held accountable. If a civil engineer or a firm create a bridge that falls down, those parties are considered liable. If a company writes software that leaks your social security number, youâll be lucky if they get a slap on the wrist. 1 As information becomes more and more valuable, this dichotomy becomes more and more unsustainable. The ever-expanding scope of software means that it will continue to have a growing impact on the world around us. But legislation is not keeping up, and that has to change. Lifting others up The great majority of software developers are very lucky to have marketable skills in an ever-expanding market. But one person canât reap all of those rewards, so Iâd argue it is a moral obligation to spread them around. You can use your skills to lift up others and create a better industry. There are already many people in your proximity who can benefit from your expertise and willingness to help. Developers are always learning from developers, especially from the ones who share the knowledge they have. Designers with a perspective on development can work together to create a better product. Business folks who now know whatâs possible to create new revenue streams and directions for their companies. If you donât buy the selfless angle of spreading your knowledge and giving your time, there are selfish benefits to helping out others. Instead of fighting by yourself for what you believe in, through mentorship and guidance you can create an army of people to fight for what you believe in. And I assure you that youâll learn through the process and become a better version of you. Thereâs so much space we have a direct effect on, and so much change we can make through that. Unionization I hadnât said it yet, but now I will. The u word, the word that strikes fear in the hearts of capitalists: unionization. I was having a chat with a software developer I respect and the subject of unionization came up. My friend: Iâve been lucky enough that I never personally felt the need to consider unionization, but I know that not every employer is so good with their people. Me: Thatâs the reason we should unionize. We can afford to on behalf of others. As an industry that is dominated by white males, this ought to sound familiar. Itâs privilege. And those who have privilege should use it to elevate others. Unionization would allow software developers to collectively bargain for the greater good of the industry. At its simplest, this is valuable from an economical perspective. Collectively we can negotiate better compensation for ourselves, but more importantly we can create a rising tide that lifts all boats â allowing those who arenât yet making as much to be . This will be of disproportionate benefit for underrepresented communities, and create more opportunity for economic mobility. We can also use it to shape our workplaces. Sexual harassment runs rampant and we wonder why there are diversity issues? Companies mandate forced arbitration and only remove it under pressure. Or build their companies on the backs of contractors who have no rights or recourse to improve their own situations. 90 day exercise windows for people who created a disproportionate amount of value during a companyâs life. Then we reach the next level, the galaxy brain of unionization. A form of checks and balances for software. 
Software developers are the ones who create software, and so we should have say in how itâs used. As an industry weâre creating more value than has ever been created before in history. Job mobility is at an all time high so itâs not like weâre short on options. And yet we havenât taken advantage of that as a collective. Software should be used in ways that align with our morals, values, and sensibilities about how we believe software should shape the world. Separately we can all ask for that to happen. Together we can demand it. Additional Thoughts From 2020 On Education This is one of my biggest hills to die on regarding the industry. The computer science curriculum should be completely revamped. Extend programs to 5 years, a minimum of two ethics courses, two philosophy courses, and provide real world job opportunities every fourth quarter. https://t.co/Ta4z5iCSb9 â âš Joe F. âą âš (@mergesort) June 30, 2020 I am convinced that students would be better served by a few semesters of reading Pascal, de Beauvoir, or Algorithms of Oppression than being forced into three semesters of classic, electromagnetic, and wave/optics-related physics courses like I was. â âš Joe F. âą âš (@mergesort) June 30, 2020 While weâre at it, throw in a couple of computing history courses. Teach the youths about all of today's ânovelâ problems that are actually reinventions or practical applications of CS theory from the 60âs and 70âs, and about all the women our industry is actually built on. â âš Joe F. âą âš (@mergesort) June 30, 2020 The tech industry can use an injection of the humanities because technology is built to improve humanity. Letâs not forget about the humans on the other side. Iâm open to suggestions, so please reach out if you think of anything Iâve missed and should include. I wonât use this section to argue that companies shouldnât take on this information at all, but really they shouldnât hold onto anything they donât need to run their product.â©â©
What I Read in 2018
When 2018 started I set out to read 10 books this year. Much to my surprise I ended up reading 25 books in 2018. The most important reason I was able to get through 15 more books than I'd expected was that when the year started I set a goal for myself to read at least 15 minutes every day. I ended up accomplishing that 328 times over 2018, meaning 9 out of every 10 days I made significant progress on a book. I was able to find time by replacing a couple of podcasts I'd listened to with time for reading, a strategy that netted me about 1-2 hours every week.

Without further ado, the books I read this year are below in reverse chronological order, newest at the top. I'd highly recommend reading the bolded ones, generally because they've shaped how I think or how I perceive the world. Most of the rest were pretty good too if you're looking for a suggestion.

Why We Sleep
The Inner Game Of Tennis
Stubborn Attachments
Brave New World
Antifragile
The Buddha's Noble Eightfold Path
An Autobiography of John Stuart Mill
Intercom On Jobs To Be Done
Zen and the Art of Motorcycle Maintenance
1984
Communist Manifesto and Social Contract
Siddhartha
How to Create a Mind: The Secret of Human Thought Revealed
Shop Class as Soulcraft
The Purity Myth
High Output Management
Triggers
Mindset
Amusing Ourselves To Death
Nudge
Algorithms To Live By
A Random Walk Down Wall Street
A Brief History of Time
Machine Learning: The New AI
The Hard Thing About Hard Things
Sapiens

The most surprising thing about reading this year wasn't the books I read, but the sheer amount of content from the internet I consumed. I save a lot of essays, blog posts, and articles in Pocket so I can read them on my own time, without keeping dozens of tabs open, in a nicely formatted manner, and most importantly available offline. This year I was very cognizant of not letting articles pile up in Pocket, which meant I had to put in a concerted effort to either read posts or delete them when I realized I wasn't going to get to them. I devoted a little time every other Saturday to cleaning out my Pocket queue, pruning my list either by reading or deleting. The end result of that: I read 102 books worth of content from the internet this year.

An additional surprise was that Pocket isn't where I read all my content, but I also read a lot directly in my RSS reader. I can't know for sure how much I read through RSS, but at the very least I'm sure it would have added on another 20-30 books.

And that's what I read in 2018. I'm setting my goal a little higher in terms of books in 2019, my goal is to read 20. I really enjoyed getting into books, and all that I learned. I'm also setting a goal to counter-balance my internet consumption. I don't have a number, but I intend to read fewer articles and blog posts, and about the same number of essays. I really enjoyed the long-form content I read, but I'd also like more time in 2019 to work on side projects, so that time has to come from somewhere.
Questioning Vulnerability
As a mentor, I give a lot of advice. I give a lot of advice that comes from a breadth of experience. But my experience is rooted in the present, to remember how I felt earlier is an exercise in empathizing with a past version of myself. And memories are a fickle thing. In fact, there are many biases that affect how you remember an event, so itâs possible that my memories arenât even an accurate reflection of the reality that I lived. One piece of advice I give often to newer (and more experienced) developers is to ask questions. Ask a lot of questions. Ask questions until youâre sick of asking questions, and you suspect the other person is sick of hearing them. Itâs going to take a while until you feel smart, or you feel good about your knowledge, but keep pushing through and putting yourself out there until you start to feel it. Note to senior developers reading this: Never get sick of answering questions, and always make whoever is asking feel rewarded for having asked. Itâs half the fun of being a senior developer. I always asked a lot of questions when I was starting out. I suspect itâs a big part of what led to me becoming a better programmer. I was never afraid to ask a question, and now Iâm better, so it seems like good advice to give to someone who has a lot to learn. But what Iâd conveniently forgotten was that I too was afraid to ask questions. I forgot about how I would spend hours researching the question I wanted to ask so that way by the time I did ask it I would only look a little dumb rather than like a full on dunce. By the time I asked a question, I wasnât afraid of asking anymore, because I knew there was little downside. The lucky thing for me was that I learned a lot through this process, and that exercise ended up being a big part of my growth as a developer. Itâs an incredibly vulnerable thing to ask a question and open yourself up to feeling stupid. And admittedly, a lot of people in this industry do a great job of opening themselves up for questions and not making people feel stupid for doing it. Itâs one of the best things about an industry that can be less than ideal at times. I recently realized that I havenât felt this vulnerable in a long time. Part of this is incredibly exciting, it means that Iâm putting my ego aside and focusing on what I really want, growth. The other side of it though is that while the advice I give to ask questions is sound, it rings a little hollow since I donât feel that pain anymore when I ask a question. I try my hardest to be encouraging, to make sure no one ever feels stupid when they ask me a question, and to reward their curiosity and learning with attention and genuine effort. And if I donât know something, we work through it together, another way to show that being vulnerable is a-ok. But I want to remember how that vulnerability feels. There isnât really a way for me to replicate that feeling though. No matter how wrong I am about something, no matter how hard something I want to try learning is, if it doesnât work out or I give up, I can always fall back on doing what Iâm already good at. Thatâs just not the same scary place that an inexperienced developer finds themselves in, they feel vulnerable because there isnât that safety net to fall back on. Iâve always wanted to learn how to draw. Iâve always felt bad that I canât draw well, and had convinced myself that I just wasnât the kind of person who can draw. 
But recently after some life events showed me that I can start from scratch, that I can start over, that Iâm in control of what I learn and how, I decided to give it a shot. I picked up a copy of Drawing on the Right Side of the Brain a few weeks ago, and while Iâm only a couple of chapters in I already see an improvement. There are many reasons I decided to learn how to draw. I love wordplay, but words leave a lot to interpretation. I would like to make drawings and comics that express the full gamut of the feelings Iâm trying to convey. I would like to draw the way I see the world. I would like to have a creative outlet that has an iteration cycle on the order of minutes or hours, not weeks or months the way software does. And the list goes on. Even writing prose is more vulnerable than writing software. Whenever I write something, it never feels complete. Writing exposes a part of me that is subpar to the world, especially writing this post which comes from a place of inadequacy, not from authority. But a real way I can recreate the vulnerability Iâm seeking to understand, the way a newer developer feels, is to draw. And most importantly, to show people my drawings. Iâm going to start from a place where I know next to nothing, keep learning, and continue growing. Iâm going to ask questions, so many questions, and expose my subpar self to the world. So hereâs my first public drawing. It took me nearly three hours, but I learned a lot. âą I learned how layers work, and how they can save you a lot of time so that way my next drawing doesnât take three hours. âą I learned how the art pencil tool differs from the marker tool. More importantly, I learned how to combine the two to make things look the way I wanted them to look. Iâve always wondered why my drawings didnât look like the drawings I see from artists, and now I realized it was because I wasnât using the right tools⊠⹠I learned a lot about shadows and tone. Halfway through my drawing I realized I could throw away all the work Iâd been doing and combine colors to achieve the effect I was going for. It took me some experimentation, but the end result looked more true to what I had intended and was faster to achieve. âą Most importantly I learned that itâs all reversible. Iâve always approached drawing as a finesse activity, one that I was afraid of messing up, but like any creative endeavor itâs about expressing yourself. Being fearless makes expressing yourself infinitely easier. Note to developers reading this: Software is incredibly reversible too! Donât ever feel afraid to code something wrong, the worst thing that can happen is you delete it. As simple as these things may sound, they never clicked in my brain. And now they do, and thatâs the beauty of learning, now I see the world a little differently. While this post is about developers, the lesson applies to everyone learning something new. So if youâre learning something new remember that while the vulnerability can feel like a curse, it can also be a blessing â it means youâre learning.
Reclaiming My Time
The people will not revolt. They will not look up from their screens long enough to notice whatâs happening. George Orwell, 1984 The Revelation I'm overwhelmed â by content. This shouldnât come as a surprise, the term âinformation overloadâ has been around since the 1960âs and the internet has only accelerated our access to content. We're inundated by content wherever we go, willingly so in most cases. We can't go a minute without listening to music, reading a tweet, checking the news. Worst of all, we convince ourselves that it's ok, and that it makes us better people. In a world with infinite content and limited time, the most difficult problem has gone from finding great content to curating it. A well-balanced life is a healthy life. Contrary to popular belief, you can have too much of a good thing. In fact, gluttony is possibly the biggest sin of our times. Just because cereal can be "part of a complete breakfastâ doesnât mean you should have four bowls of Lucky Charms. With never-ending access to a near-infinite amount of the greatest content humanity has ever created, how can we say no? Without outside guidance, we have to focus our effort on maintaining our own information diets. Falling In Love With Content The year was nineteen diggity two and the depression hadn't hit yet. Or maybe it was around 2009 or so, it's hard to remember exactly when I discovered RSS. I stumbled upon a never-ending stream of articles written and recommended by people who were clearly my intellectual superiors. It was fascinating to walk in their footsteps, to understand how they were thinking, to live through their eyes. To many Twitter is a real time RSS feed with a social graph attached. It is also the most malleable technology product I know of. Your experience is completely in your hands to shape, and the breadth of content is near infinite. I chose to use Twitter long before working there, and created a similar experience to RSS, only centered around people instead of articles. Podcasts are an audio manifestation of RSS. A cross between the directness of Twitter and the thought-out nature of a blog post. They are the literal interpretation of giving a voice to your favorite content. And they're easy to consume; you don't even have to read. (Congratulations on making it this far by the way.) Thereâs no feeling like the initial rush of finding a person with new or interesting view points. And in turn, I decided to give them headspace. I would follow them on Twitter, start reading their blog, and listen to their podcasts. This exposed me to a wide variety of thinking. It allowed me to understand their perspectives. It taught me about topics I'd never thought much about: economics, philosophy, behavior psychology, and how it all blends into the world we see before us. Pulling Back Once I understood someone's line of thinking, I became attached. The idea of removing someone from my digital life was scary. I knew that I would miss their perspective. But I also knew that I needed to move on; I didn't have enough mental capacity to track everything everyone interesting does, and so I felt overwhelmed. Although it took a while to admit it to myself, the lessons I've learned from these people would stick around forever. I don't regret my consumption. The lessons I've learned will be carried with me through the rest of my life. I learned to make coherent arguments. I learned to look at problems through different lenses. I learned the joy of seeing another person's perspective. 
Most importantly, all the learning made me feel rewarded as a human being. I didnât want to halt my consumption entirely, but I needed to cut down, to free up my personal bandwidth. The fear of missing out drove me to think I would miss out, but I had to trust that some content would be enough. Now when I find someone new to follow, I decide to pick whatever medium feels most appropriate for the content. This lets me hear their voice in the way most suited to them, and frees up space for other people. Continuous Improvements I donât have a one-size-fits-all answer for improving content consumption. The process is a continuous one. I find out what changes make me happy, what makes me unhappy, and it gets a little better all the time. I still read a ton of blogs, probably too many for my own good, but now I donât read them right away. I give myself a little more time to decide whether I want to read something before I read it. I add blog posts to my Pocket queue and give them a day or two to sit there. If an article still looks interesting, then Iâll give it a read. If it doesnât, now I donât feel so guilty about throwing it away, thereâs surely something else that will fill that space. This process is intentionally slower and more deliberate. Just because an article jumps out at me initially doesnât mean itâs necessarily good. A benefit of this is that Iâm more thoughtful about what content I consume. I also find myself getting better at identifying content that is likely to be evergreen and valuable, versus something thatâs just attention grabbing. I look back and realize that without leaving space for others in the content I consumed, I was resorting to this same pattern of all or none thinking. Rather than allowing myself to be bored for a moment, I chose to allow myself to feel overwhelmed. You can do it too Unfollow that person on Twitter. Stop reading that blog. Unsubscribe from that podcast. Give yourself a little time to digest. Leave some space for boredom and serendipity. Serendipity needs space to grow. Also read a damn book or two.
The Learn Swift Podcast - Episode 28
I recorded an episode of @learnswift_fm with Steven Sherry a couple weeks back. We had a great chat about contracting, computing history, philosophy, Smalltalk, and even a little bit about Swift. I think you'll really like it, and recommend giving it a listen.

The Learn Swift Podcast: #28: Joe Fabisevich

Original recording
15 Minute Meetings
How many times have you been called into a meeting only to realize five minutes into it that you're likely going to speak for sixty seconds… if you're lucky? You potentially hold one piece of valuable information, and the rest of the discussion doesn't concern you much. There are probably four other people like that in the room as well.

No company does meetings well. People will always be struggling to get better with meetings until meeting nirvana is attained, and there are no meetings.

I recently read about an idea that the Pinterest engineering organization has been trying out to give their engineering team more contiguous blocks of time for development. They only have meetings on Mondays and Fridays, giving them a long stretch of time for work in the middle of the week. Inside the organization this is seemingly being considered a success. Externally, it was met with mixed reviews, and understandably so. It appears the number of meetings and time spent in meetings hasn't gone down, it was just time boxed differently. I'm glad Pinterest didn't just stick to the status quo, this is still a net-positive, but it doesn't bring us closer to our goal of meeting nirvana.

There are two ways to reduce the amount of time spent in meetings. Fewer meetings or shorter meetings.

Let's imagine an ordinary 30 minute team catch up meeting. Not a brainstorming session, not a quarterly planning meeting, and not a 1 on 1. If we had to make the meeting only 15 minutes long, then you'd have two options. Compress what's important into 15 minutes or lose 15 minutes worth of valuable information. I don't know about you, but I've rarely been in a meeting where every moment felt high signal and low noise, so let's work with the compression scenario.

The compressed information is very likely going to be the information you need to get a team on the same page. But what if we can't cover everything in 15 minutes you say? Then whoever needs to stay to get the remaining details hashed out can stay. Everyone else is free to leave, with no stigma attached, and more information than they had coming in. Meetings are still booked for 30 minutes to accommodate the people who may need more time, but most people will be able to get more time to get back to their non-meeting duties.

The great manager Michael Lopp, aka Rands, once said:

If a meeting completes its intended purpose before it's scheduled to end, let's give the time back to everyone.

What if we scheduled all meetings with a goal of giving people their time back? I'd love to hear what ideas you may have for that.
Startup Code Quality
I was asked:

Is the code quality at an early stage startup higher or lower than the code quality at a bigger company?

This is a really good question to ask if you're a developer looking to join an early stage company. To answer this question though, I'd like to take a step back.

Code is never context free. At any company, writing code is the means to solve a business problem. Early on the problems ahead of you are very fuzzy and subject to change. After two years of working on your business, it's very likely the code you wrote initially will be very out of sync with the problems you're solving today. The natural inclination is to think that the code you wrote was bad. But that's not true! What's really changed is that as your company has grown, it's developed product/market fit. The problems you're trying to solve today are more well defined, clearer, and can be more properly scoped.

This means that you'll now be able to have engineers better assess what problems they should be solving. They will be able to tackle more specific complex problems with this bandwidth. In a context-free manner, some people will look at the problem and say "this code is so much better than what we used to write".

Large organizations have their flaws too. Bureaucracy and process can get in the way of writing code. This can manifest in you not getting to write code or your project being cancelled. These are their own set of problems, but slightly different from creating difficult to work with code.

But to answer the question:

Is the code that you're writing early on in a company's life bad? No.
Is the code you're writing as the company grows bad? No.
Are the problems you're solving clearer as the company grows? Yes.
The Dog Ate My Take Home Assignment
It has become quite common for companies interviewing engineers to give candidates take home tests. These consist of an interview problem (or problems) which they can work on in their free time. This gives the candidate the benefit of not being under pressure in a high leverage interview setting. This gives a company the benefit of seeing how a candidate works in a normal work environment.

I had one of these recently, and to say it could have gone better would be an understatement.

Though I could start whenever I wanted, it was a timed exercise. I was given 3-4 hours to complete the exercise, though it was emphasized it should take 1-2. Time to completion would be a factor in determining how well you did. This is the first time that I've had a timed assignment, but it seemed fair to me. The exercise itself is unimportant, and I can't share the details of it, but I have no qualms about it. The only thing you have to know is that I had to generate a command line app, so the reviewer could run the app with variable command line arguments.

Just under two hours later, I had managed to complete the assignment and was ready to email it back to the company. I'd named the project the same name as the company, and so the binary that was generated was of the same name as well. I copied the binary to my desktop, to make it easier to drag into the Mail app, hit overwrite, and, wait... hit overwrite...

I deleted the folder holding all the code. All I had left was a binary. The reviewer would have no idea how it does what it does.

I panicked, and quickly emailed the company to tell them that the dog had actually eaten my homework. I didn't hear back, so a few minutes later I emailed them the generated binary from my assignment. I told them to run it, to verify that I did actually make something. In the meantime I got started on restarting my assignment. I checked for the last Time Machine backup, and found one from over an hour back, and used that as my starting point. About 40 minutes later I was done recreating the assignment to the best of my abilities. I emailed them the assignment, along with a new binary, and hoped for the best.

Amazingly, I almost did the same thing, copying the binary to my desktop, but this time managed to not hit overwrite.

Moral of the story, think before you overwrite.
The Future Will Be Signed
Cryptography is becoming more important in our every day lives and there's no way around it. Whether it's the calls from governments to ban encryption, come up with "responsible encryption", or to violate norms and laws, cryptography is playing a role in shaping our society. I'd like to approach the role of cryptography from the other perspective though, from the side of helping us prove facts about the world around us.

We are entering an era where technology empowers people to create artificial evidence for stories and narratives. While we can't yet create facts, we're approaching a point where artificial evidence looks very believable.

Nvidia is using machine learning to generate fake pictures of humans that seem so real that humans can't tell they're fake. Lyrebird is building technology that allows people to enter text, and they will generate a rather convincing audio file of someone (like… say… Donald Trump) speaking it. Watch this video clip of Barack Obama saying things he never said. I guarantee you there's at least a few people who are fooled by this today.

Today you may think "this doesn't quite sound like Donald Trump" or "that doesn't quite look like Barack Obama", but technology only moves forward. It's going to get better and better. What happens when you can't believe your eyes and ears? What happens when you have to question every picture you see, every sound you hear, every video you watch? We need to have answers before this becomes a problem.

We're going to need a way to prove the authenticity of a piece of digital content, everywhere, in a simple manner. This is where public key cryptography comes in. Our current solutions are noble efforts, but remain too complex. This infrastructure is going to have to be baked directly into the software that developers build, in a way that is transparent to the end user. A politician (or anyone) needs to be able to sign a tweet, audio recording, or video clip to prove the authenticity of what they are saying. With the creation and fabrication of content being so easy, we're going to need a model where the person creating the content can prove it is trustworthy, and otherwise it should be treated as inauthentic.

Outlawing encryption and controlling cryptography is a really bad idea. It may end up that these technologies help us maintain a level of trust in our society.
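To make the idea concrete, here's a minimal sketch of signing and verifying a piece of content with public key cryptography, using Apple's CryptoKit. The keys and content are illustrative only; a real system would also need key distribution, identity verification, and a format for attaching signatures to published content.

```swift
import CryptoKit
import Foundation

// The author generates a key pair once and publishes the public key.
let authorKey = Curve25519.Signing.PrivateKey()
let publicKey = authorKey.publicKey

// Signing a tweet (or an audio file, or a video) produces a signature anyone can check.
let tweet = Data("I actually said this.".utf8)

do {
    let signature = try authorKey.signature(for: tweet)

    // Content that matches the signature is treated as authentic.
    print(publicKey.isValidSignature(signature, for: tweet)) // true

    // A doctored version of the same content fails verification.
    let doctored = Data("I never said this.".utf8)
    print(publicKey.isValidSignature(signature, for: doctored)) // false
} catch {
    print("Signing failed: \(error)")
}
```

The hard part isn't the math, it's making this invisible to the person reading the tweet, which is why it has to be built into the software itself.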
Learning About Cryptocurrency
Cryptocurrency is all the rage these days. From Bitcoin to Ethereum to Ripple, to some silly sounding thing someone will come up with tomorrow, it's something people want to know about.

At the risk of sounding like a super noob, what's a good introduction to crypto? From the basics through understanding current landscape

- Benny Wong (@bdotdub) January 5, 2018

So you're looking to learn a little something about how these new technologies work? Well Benny (and anyone not named Benny), I don't have all the answers, but I do have two resources that people who are interested in the technical aspects should check out.

If you want to learn about the blockchain, I would start here. The video is only 17 minutes, and yet covers everything you need to know about how a blockchain is built and works.

If you want to learn about Bitcoin, I would start here, at the time marker 41:55. Everything I learned about Bitcoin in this episode of Security Now still holds up today, six years later. Steve Gibson was very early in understanding what makes the protocol interesting, along with its upsides and downsides. The only real thing that's changed since this was recorded is the addition of exchanges like Coinbase, Gdax, Kraken, and others into the marketplace, as a way to centralize trading.

A personal note, if you're just looking to invest in some cryptocurrency you probably don't need to understand the underlying technology. The investment side currently is a speculative market based on projecting who the winners and losers in this space are going to be, and for the most part that's relatively disconnected from the technology.

And one more note, if you're going to invest in cryptocurrency right now, only put in however much money you're willing to lose. No one knows how the market is going to play out so I'd equate the whole thing to gambling at best.
Everyone Should Use Generics Except You
As I was on hour six of debugging how to read an object from the database, my brain suddenly noticed the slight difference in two lines of code. The compiler error had been off, too vague to help me realize that I was never hinting the correct type to the function. Generics had struck again. I cursed in the general direction of my cat (unintentionally), and moved on. There was nothing I could do but accept that we've all been there, and move on.

The creators of the Go language have so far resisted the notion of adding generics. They have a well considered argument that adding generics into the language will add to its complexity, so much so that the power of the feature will be outweighed by the complications that the feature brings. What proponents of generics say is that the core team is not properly considering all the benefits of generics. The language's surface will be simplified, your code as a consumer will be easier to write, and even that Go already has generics but only for certain blessed types that are built into the language.

Combining the two thoughts above, I had a thought of my own, since everything's a remix after all. We boil down our problems to platitudes, as if fixing that one problem will be salvation for our existence. Functional is better than object oriented. React is better than Angular. Static is better than dynamic (it is…).

Writing generic code is one of those trade offs. It can be mind bending, it's no walk in the park, but it can be incredibly powerful. I don't personally agree with the Go authors, but I'll boil the problem down to a platitude of my own: I want generics in my language. I don't want anything to do with them myself 95% of the time, but I would love the features that others can build which capitalize on generics to make my life easier.
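As a rough illustration of that platitude, here's a minimal sketch in Swift. The Database type and its in-memory storage are hypothetical, purely for illustration: the generic machinery is written once by the library author, the call site stays simple and type checked, and passing the type explicitly sidesteps the kind of vague inference error described above.

```swift
import Foundation

struct User: Codable {
    let id: Int
    let name: String
}

final class Database {
    private var storage: [String: Data] = [:]

    // The generic machinery lives here, written once by the library author.
    func save<Value: Codable>(_ value: Value, forKey key: String) throws {
        storage[key] = try JSONEncoder().encode(value)
    }

    // Asking for the type explicitly avoids relying on fragile type inference at the call site.
    func fetch<Value: Codable>(_ type: Value.Type, forKey key: String) throws -> Value? {
        guard let data = storage[key] else { return nil }
        return try JSONDecoder().decode(type, from: data)
    }
}

// The consumer never writes a generic signature; the compiler checks the types.
let database = Database()
try? database.save(User(id: 1, name: "Joe"), forKey: "current-user")

if let user = try? database.fetch(User.self, forKey: "current-user") {
    print(user.name) // "Joe"
}
```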
Thinking About Thinking
Two meta-skills that help a programmer grow more than just practicing their coding. Thinking about thinking, and focusing on focusing.

- Joe Fabisevich 🐶🐳™ (@mergesort) July 26, 2017

How To Think About Thinking and Focus on Focus

Don't focus on finding the perfect to-do list app. Once found, you still have to light the spark inside that keeps you going. It's that spark that moves you along the road; a road that stretches surprisingly far.

I set a goal for myself in early 2017. I was going to spend a lot of time learning. I wasn't sure what this would look like. I gave myself time to figure it out. Things are rarely simple in life. I knew I wouldn't find the answer right away. I wanted to learn how I learn. Everyone learns differently. I needed to figure out the best approach for me.

Before I even sat down to learn any topic in particular, I attended a Coursera class to gain perspective on learning. The course involved a few weekends' worth of work and I came away with great techniques and a deeper understanding of learning as a whole.

Next, I considered the subject matter I wanted to learn. It had to be motivating: motivation has always been a sticking point for me. So I decided to focus on skill building. I wanted to combine these two thoughts, learning and motivation, to put myself in a better position to learn. A few months passed and that's when I realized: I still had a lot to learn about thinking itself.

Reading

I've recently returned to reading, or more precisely, listening to books. My favorite books focus on what's called "metacognition". Metacognition means the awareness and understanding of your thought processes. Metacognition unlocked a door for me I hadn't realized could be opened. I've always thought of myself as a person with good self-insight. As I began to read more, my doubts grew.

Thinking Fast and Slow by Daniel Kahneman taught me how cognitive biases work. This knowledge left me both concerned, and, unexpectedly, relieved. My concern stemmed from the mental gymnastics my mind performs. I found myself especially prone to the attribute substitution bias, and a few others. The brain prioritizes viewing the world in a way that suits you. It's instinctual, protecting us from doubt and pain. It also keeps you away from new modes of thinking. Once I accepted that everyone's mind tries to do this, I began to open up to new possibilities.

I was also relieved. This model let me understand why I thought the way I did. More importantly, I could now leverage that knowledge for further growth. My path started with a simple goal: I wanted to learn more. Now, I was ready to actually start putting these learnings into productive gains.

Practicing

Next, I needed to move from understanding myself to real world practice. In this, I learned three important lessons:

1) Think about thinking

Understanding yourself provides the key to discovering your boundaries, limitations, and possibilities. These margins guide you to areas where you can grow. Even the smartest people are not able to learn something new any time. Their brains need rest, balance, and fuel. The more in tune you are with where you are mentally, the easier you can acquire and assimilate new information.

Your ability to take in information changes over the course of your life both in the long and short term. For many people, it's easier to learn at 2pm than 10pm. The next day, well rested at 10am, it's easy again.
This pattern isn't true on only a daily basis, your life will go through similar cycles as well.

2) Plan

Time is the most precious resource you have, so deploy it wisely. I make a schedule every day, split up into half hour intervals. 1 If something takes longer than a half hour, bubble it in for two half hour intervals. If something takes less time, feel free to squeeze a couple of tasks into a half hour interval. This is a technique I picked up from Deep Work, to help my daily planning.

Each day's planning acts as a meditative exercise. Every morning, I think about the shape of my day. I list my goals, which serve as landmarks throughout the day. On review, I can decide whether I'm accomplishing those goals and making progress.

After adopting this style, I quickly noticed I had a weaker grasp on my time than I thought I did. It is incredibly difficult to create a schedule then stick with it, exactly as planned. And that's ok. Each time I got off schedule, I could re-adjust and re-orient. I'd move around priority tasks, push others off for when I could give them the attention they deserved. When I found a task that kept getting bumped, I'd reconsider its merit. Over time you realize, "maybe this task isn't as important as I thought it was". Do this consistently, and you won't look at time the same way ever again.

3) Focus

The easiest way to stay focused is to avoid distraction. Isolate yourself however you need to.

Physically

If your environment isn't productive, change it. I don't work well in open offices where noise and conversation distract me. Other people can't work from home, they prefer the sounds of the world as a background hum. A coffee shop may be a great match to your style, or the silence of a museum library.

Mentally

Meditation can boost your energy levels. I use Headspace to introduce a five minute refresher during my mid-afternoon. Meditation allows my mind to rest after it's been working for the entire day. Some prefer to start their morning off with fresh thoughts. Others like to clear their mind at night, making it the last thing they do. See what works best for you; maybe it's all three.

Digitally

Tucking away distractions helps you focus on your task. I try to keep everything that's not immediately pertinent out of my sight. Surprisingly, hiding my Dock has made me far less distracted. I used to spend my day distracted by red badge fever. Slack, Twitter, Things, and other badged apps would eat into my thoughts. Now I stare full screen at whatever I've got open with no little red badges to grab my eye or pull at my thoughts.

Acts like browsing the web are now a conscious choice. When I'm writing code and want to check my daily schedule's progress, it's a choice, not an impulse. I still live in the real world and connect to these things but I am not prodded to do so. I act when I find some time. When I'm focused, I'm focused. When I'm distracted, I'm distracted.

Training your brain to focus is like any other form of exercise. It's hard at first. As you root out distractions and adapt your environment, your focus muscle grows. As with all change and exercise, it gets easier and easier to avoid distractions over time.

Next

Still interested in figuring out how to grow more? If so, congratulations. It's a hard but amazing path to look deep into yourself and decide to make changes. I wish you well along your journey and implore you to move deeper in your voyage. This journey isn't about finding the perfect to-do list app.
Once you've found one, you still have to find what's inside of you that encourages you to keep growing. Once you've found that, you'll be surprised at how long the road stretches. 1. Ok, I have to admit, I can't always do this. Sometimes it's too rigid, and I'm not well-disciplined enough yet to live by my own advice. There are days where I don't have as clear a focus, and it shows in my schedule.
Handing Off Public Extension
This is a blog post by Jasdev Singh, originally published on jasdev.me. I'm re-posting here since it's directly tied to me taking over his project, Public Extension.
This is a blog post by Jasdev Singh, originally published on jasdev.me. I'm re-posting here since it's directly tied to me taking over his project, Public Extension. Firing off the first tweet for @PublicExtension on October 9th, 2015 was a rush. I was on the tail end of my batch at the Recurse Center, after having spent the previous quarter transitioning from a backend engineer to writing Swift every day. The goal was to regularly post extensions I've come up with, stumbled upon, or received from the community. In the span of a year, I collected 89 extensions, had countless conversations with the Core Team, and even represented the account at XOXO Festival. However, I (accidentally) treated Public Extension like a "Forever Project." Without an end in sight, the weight of "do I just keep running this account ad infinitum" caused missed days to turn into weeks, and weeks into a year of hibernation. My energy drifted elsewhere: writing, building out Peloton's iOS team, and crafting memories with friends on two wheels. Until a couple of weeks ago, I had almost forgotten about the project when Joe (commonly aliased as @mergesort) expressed interest in taking the baton. I couldn't imagine a better fit. Not only is Joe a great friend who is 1000% game to volley Swift puns, but he also has a track record of helping, advising, and guiding folks in the iOS community. A lunch, repository transfer, and a few iMessages later, Public Extension has a second wind. Joe and I have discussed some of his early plans for the account and I'm stoked. Please give him the same support and cheers y'all have kindly given me along the first leg of PE's relay. Below are some notes about the transition and aspects that will remain the same: To prevent old commit permalinks from breaking, we transferred the repository to Joe's GitHub account and I forked it. This means that all of the old tweets can safely be embedded. The account wouldn't have been possible without submissions. Going forward, all extensions will continue to link back to the original author, if applicable and with permission. On the note of submissions, they will still be accepted. Joe can provide more details on preferred ways to do so.
Foundations
Every day at a startup is an exercise in getting to tomorrow. Some days itâs easier, some days itâs harder, but if you donât make it until tomorrow, there wonât be a next week, month, or year.
This is why building a long-term foundation is incredibly important.
Every day at a startup is an exercise in getting to tomorrow. Some days itâs easier, some days itâs harder, but if you donât make it until tomorrow, there wonât be a next week, month, or year. This is why building a long-term foundation is incredibly important. If you keep making it to tomorrow without thinking ahead, in three years you might look at what youâve built and realize that youâve set yourself up for mediocrity or failure. Every decision you make today is implicitly a decision youâve made for the next few years. Thereâs a question of whether youâll make it that far, and you can always change course from your decisions (and you should!), but itâs not without cost. Stop thinking about how to build a company that lasts, start thinking about how to build a great company that lasts. Start making decisions today that will help you build a great company. Because down the road you might look back and realize that your company isnât that great after all.
Dev Chats - Joe Fabisevich
This is an interview that I did with Sam Jarman, originally posted on his blog.
This is an interview that I did with Sam Jarman, originally posted on his blog. Introduce yourself! Who are you? Where do you work? Hey Sam, thanks for this interview! I've been reading your other developer interviews and am humbled to be in the same company as them. I'm Joe Fabisevich, an iOS developer in New York City, with no specific ties to a company right now. I spent the first 5 or 6 years of my career working at startups like Timehop, Betaworks, and Bitly. Last year I decided to take the things I'd learned and help other companies grow via consulting work. My job is to help companies get through whatever challenges they're currently facing. I enjoy working most with early stage startups, so a lot of my work focuses around getting startups to their 1.0. Often times they're starting with nothing more than sketches and wireframes. Other times I help businesses figure out their mobile strategy, or work with iOS teams to grow their skills. This is especially interesting as iOS has recently moved to Swift, and there are few experts in the field so far. I wanted to add flexibility in my life, and now I'm able to tackle all different kinds of challenges. Not all of my work is even engineering related, which makes it fun to grow in many directions. Who or what got you into programming? It's a bit cliché, but I got into programming somewhat by accident. In high school my grades were pretty poor, and I had to choose a major for my junior and senior years. My mom was a mainframe programmer and told me that if I took the computer science course, she would help me out. (Please don't tell my 11th grade AP Java teacher Mr. Turner). After about two weeks, she declared that she was completely out of her element, and that I was on my own. I was never a great rote learner, but I was always good with understanding patterns and systems. Programming lends itself to hands-on learning, which made me finally feel good about learning. After some initial success, I was pretty hooked. As I got better, I was able to start helping others, which was also something I'd never been able to do. In college I majored in computer science, but rather aimlessly. When the iPhone came out, I really wanted to make an app for it with a friend, and so I mustered up my strength and did. The code was awful, and the UI was hideous by today's standards, but there were probably only 10,000 or so people building apps, so it felt like quite an accomplishment. Since there was so little iOS talent out there, I was able to parlay that app into an internship. As I left school (after spending most of it building apps in class), I was able to turn that internship and subsequent apps into my first startup job. What's the tech scene like in NYC? Any major upsides and downsides? The tech scene in New York City is quite a mixed bag, but I feel that way about most things in New York. Tech is not the top dog in this city, which has its pros and cons. The biggest pro is that as an industry, we stay mostly grounded. We use our technology to lift up other industries, as well as tech for tech's sake. This helps us avoid a monoculture like you see in Silicon Valley. The biggest con is that as an industry, we stay mostly grounded. This means that we don't attract as much talent (though there's still plenty), or work on as many moonshot ideas as you'd see in Silicon Valley. Those moonshot ideas are the ones that grow to be Facebook or Airbnb, and affect our lives in an outsized manner.
As a person, it's hard to say whether I would trade one for the other, but it's always a fun thought experiment. Youâve worked both for companies and for yourself â do you have a preference? What are the advantages and disadvantages? Like everything in life, there's pros and cons to everything. I don't have a preference, and don't think I'll always be a consultant, but don't think I'll always work full time either. Being a consultant gives me work-life balance that's hard to beat. It's quite an opportunity to use the extra time I have to invest in my own learning. I can spend more time reading about product, design, management, or even things like metacognition, which help me grow in my career. On the other hand there are some skills you can learn at a company over the long term. I still work at and with companies, but being with a company for a while helps you develop different skills. I tend to think of it as growing in a breadth-first manner vs. a depth-first manner. Both will likely get you to the place you want to be, but the path will look different. Ultimately what works best for you is in your hands. What has been your toughest lesson to learn in your software career so far? Iâd say the career matters. Developers often donât want to think about their career, and instead think about programming. It makes sense because it takes a while to become comfortable enough to feel confident in what youâre doing, but as a result other peopleâs other skills suffer. Iâve always been interested in product, design, and programming, so choosing a linear path was difficult for me. Nowadays I'm able to leverage those skills as a consultant and former startup founder. On the other hand, I hadnât spent much time thinking about management or a career track until recently, and realized that Iâve got little experience with that and now am playing catch up. Ultimately it may not matter because you canât predict your future, but it is very important to be in touch with your goals to move forward as much as possible, without parallel diversions. What would be your number one piece of advice for a successful software career? Everyone starts at the beginning and thereâs no way around it. Luckily, that's ok. Over the first few years of my software career I read anything programming related in sight that sounded interesting. Even if it didnât pertain to what I was currently working on, I would still read it. That ended up paying dividends as I started to expand my horizons. Even though I didn't understand it all, I had all this knowledge tucked away in the back of my brain for a later day. When it was time for me to try my hand at Python, I didn't need to start from scratch. When I wanted to build a website, it was as easy as recalling that tutorial I read a month ago. Better yet, I took the lessons I learned in other languages and frameworks, and applied them to what I was working on as I saw fit. This allowed me to grow in ways that my peers weren't, and made a more confident programmer. While this technique worked for me, I canât promise it will work for everyone. Everyone learns differently, and you have to find your path. What I will suggest though is trying it out and seeing if it works for you! Maybe it won't be the only way you grow, but it can be another tool in your toolbelt. What programming skill do you think is underrated? There are two personality traits which go hand in hand. Empathy and modesty can take you from being a good programmer to being a good teammate. 
Being a good teammate can transform you into a great asset. If you apply these traits, you'll more easily accept your flaws, and that will empower you to your co-workers and fellow community members. One underrated aspect of empathy and modesty is that over time you become more confident and humble. Confidence and humility allow you to turn anything into a learning opportunity. The more you can say âI donât know, but I want to learn about itâ either to yourself or to a peer, the more youâll open yourself up to an opportunity to grow. Over time it will become an innate part of how you approach solving problems. What books/resources would you recommend? Iâm going to be a little self-serving here, but I maintain a Pinboard tag where I post articles. The articles are ones that Iâve come across in my career that taught me something, with information I wish I knew when I was just starting out as a programmer. Sometimes I go back and re-read the articles, and I'll still pick up new things. As I've grown, the advice and stories take on new life and new meaning. I recommend going through the posts in reverse order. The first posts are more foundational and encourage bigger thinking. Iâm really big on RSS, so I recommend subscribing to the feed, so that way you can always get a new article right after itâs posted. Finally, make your shoutout! What would you like the readers to go have a look at? I think right now itâs important to not forget about others in need. There are people dealing with natural disasters that have entirely uprooted peopleâs lives. A cause that's near and dear to my heart is the Hispanic Federation, where every dollar will go to aid on the ground in Puerto Rico. You can choose your own cause, but the important thing is to do something. Personally you can find me on Twitter where I am busy saying anything that comes into my head, so my apologies in advance.
Debugging shortcuts for UIKeyCommand
I recently re-discovered UIKeyCommand while listening to Caleb Davenport's podcast, Runtime. He's also got a blog post which shows you exactly how simple it is to create UIKeyCommand shortcuts for your app.
After reading that, I decided that it would be neat to implement them across my app, so I could also start navigating around my UI with lightning speed while Iâm debugging in the simulator. I quickly realized that by using Swift extensions, I could automatically get these behaviors for free throughout our entire app.
I recently re-discovered UIKeyCommand while listening to Caleb Davenport's podcast, Runtime. He's also got a blog post which shows you exactly how simple it is to create UIKeyCommand shortcuts for your app. After reading that, I decided that it would be neat to implement them across my app, so I could also start navigating around my UI with lightning speed while I'm debugging in the simulator. I quickly realized that by using Swift extensions, I could automatically get these behaviors for free throughout our entire app. Below is a code snippet which you can drop into your app to help you speed up your workflow. With just one tap on your keyboard, you'll be able to pop a UIViewController from a navigation stack and dismiss any presented UIViewController.

extension UIViewController {
    open override var keyCommands: [UIKeyCommand]? {
        return [
            UIKeyCommand(input: UIKeyInputLeftArrow, modifierFlags: [], action: #selector(popViewControllerWithKeyCommand)),
            UIKeyCommand(input: UIKeyInputDownArrow, modifierFlags: [], action: #selector(dismissViewControllerWithKeyCommand)),
        ]
    }
}

private extension UIViewController {
    dynamic func popViewControllerWithKeyCommand() {
        self.navigationController?.popViewController(animated: true)
    }

    dynamic func dismissViewControllerWithKeyCommand() {
        self.dismiss(animated: true, completion: nil)
    }
}

Don't forget, you can make your own default shortcuts too. Happy debugging!
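For example, here's a minimal sketch of a screen-specific shortcut built on top of that snippet (my own illustration, written in more recent Swift syntax; the view controller and its refresh logic are hypothetical):

import UIKit

// Hypothetical example: ⌘R refreshes one particular screen while debugging in the simulator.
final class DebugMenuViewController: UIViewController {

    override var keyCommands: [UIKeyCommand]? {
        // Keep the app-wide pop/dismiss shortcuts from the extension above, and add one more.
        let refresh = UIKeyCommand(input: "r",
                                   modifierFlags: .command,
                                   action: #selector(refreshWithKeyCommand),
                                   discoverabilityTitle: "Refresh")
        return (super.keyCommands ?? []) + [refresh]
    }

    @objc private func refreshWithKeyCommand() {
        // Replace with whatever reload logic this screen needs.
        print("Refreshing")
    }
}

Because the subclass appends to super.keyCommands rather than replacing it, the app-wide shortcuts keep working alongside the screen-specific one.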
WWDC 2016 — My Fantasy Edition
WWDC is right around the corner! This post isn't intended to be a prediction, as much as what I hope unfolds.
As Bertrand Serlet, a former Apple engineer, discussed in this 90 second video clip, Apple often ships features iteratively. Projects start off private, only to be used internally, often times for a year or two. When they feel stable enough, Apple opens them up to 3rd party developers, and makes it an official API. Features that are deemed noteworthy and successful continue to be built on, while others are simply forgotten.
The three technologies below have gone through this lifecycle over the last few years, and I think they are ready to converge in a big way, changing how we use iOS every day.
WWDC is right around the corner! This post isn't intended to be a prediction, as much as what I hope unfolds. As Bertrand Serlet, a former Apple engineer, discussed in this 90 second video clip, Apple often ships features iteratively. Projects start off private, only to be used internally, often times for a year or two. When they feel stable enough, Apple opens them up to 3rd party developers, and makes it an official API. Features that are deemed noteworthy and successful continue to be built on, while others are simply forgotten. The three technologies below have gone through this lifecycle over the last few years, and I think they are ready to converge in a big way, changing how we use iOS every day. Universal Links Since the first days of iOS, URL schemes were a way to take you from one app to another. You could provide some context with URLs like myapp://profile, but nothing more. Then iOS 8 finally began allowing developers to break out of apps. Apple started allowing developers to create extensions, little parts of your app that can run in another app. In iOS 9, Apple went even further down that route by adding Spotlight. This method of universal search combined with the NSUserActivity API allowed a developer to define entry points into their app. Most importantly though, Apple introduced "universal links", real URLs like ones you'd find on the internet that would open a corresponding app instead of Safari. For example, if I sent you this Medium article in a text message and you had the app installed, it would open up in the Medium app, not a website. While a great idea, the implementation still left room for improvement, as users often get bounced into an app without wanting or expecting to be. Remote View Controllers If you've ever been in an app and wanted to send an email, Apple provides a way to pull up the Mail app without leaving the app you're currently in. Apple lets developers open up this Mail view (MFMailComposeViewController for you nerds out there), to send messages from within another app. And so you have remote view controllers, screens from another app presented within your app. Currently, if you want an experience like this, you'd have to integrate an SDK or do a one-off partnership with a company. I think iOS 10 will finally bring this functionality to all 3rd party developers. Imagine how quickly you could post a tweet by pressing a tweet button within an app and having it present a Compose Tweet screen instead of opening the Twitter app. How about calling an Uber when you're in Google Maps, Yelp, or Foursquare? The possibilities are endless. Implementing this can be made especially simple if you can just piggyback off the universal links that we mentioned before. Add a URL, and if the user has the app installed, it will present in your app without them having to go anywhere. Siri Having been a part of iOS for almost 5 years now, Siri has gone through a similar lifecycle as these other technologies. Initially, Siri was a concierge for Apple's apps, from setting reminders to making phone calls. Apple started adding additional partners like Yelp, Wikipedia, and HomeKit vendors. People have been saying it for years, and at this point the tech world is convinced that a Siri API is most certainly coming in iOS 10. I also believe Apple is ready to take this next step, and open it up to 3rd party developers. While I don't think we will have the ability to add Siri functionality into our apps, I'm confident that we will be able to add our own app functionality into Siri.
A likely implementation would be building queries that Siri can respond to by presenting the remote view controllers discussed above. Asking Siri to "find me an Italian restaurant" will pull up the remote view controller from Yelp, so you can satisfy those pasta cravings. Those who wish to dive into your app's richer experience could use the NSUserActivity API and deep links, to have Siri launch you into the app in the exact place you wanted. Conclusion Whether my fantasy becomes a reality, I think WWDC is going to be huge. I'm very excited, more so than I have been the last few years. If you see something like this Monday on stage at WWDC, I told you so. And if you don't, then just remember I've been wrong before, but that doesn't mean I won't be right some day.
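For a sense of the plumbing that already exists today, here's a minimal sketch (my own example, using the Swift 3 era delegate signature; the routing logic and URL are hypothetical) of an app receiving a universal link through NSUserActivity:

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    // The system hands the app an NSUserActivity describing the web URL that was tapped.
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([Any]?) -> Void) -> Bool {
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
            let url = userActivity.webpageURL else {
                return false
        }

        // Hypothetical routing: https://example.com/profile/123 opens the profile screen.
        if url.pathComponents.contains("profile") {
            // Navigate to the matching profile here.
            return true
        }

        return false
    }
}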
The Expressive Nature of Swift
Ignores commenting on another static vs. dynamic dispatch article because people won't accept Swift is a hybrid not plain static.
— Joe Fabisevich (@mergesort) May 24, 2016
Ignores commenting on another static vs. dynamic dispatch article because people won't accept Swift is a hybrid not plain static. — Joe Fabisevich (@mergesort) May 24, 2016 Guess that didn't last long. There's a conversation happening in the iOS community at the moment: static vs. dynamic programming. On one side we have many people who have been writing Objective-C for over 20 years (wow!) saying that the dynamism of Objective-C is the reason why it is an amazing language, and has succeeded. The argument is predicated on the fact that those nay-saying it don't understand the power of dynamism, and how it's empowered programmers. On the other end you have many people saying that static languages are the way forward, that a whole class of errors is avoided, and that we should look at all the bugs prevented by having a good type system! This back and forth ignores that Chris Lattner, the creator of Swift, has himself stated that Swift is a hybrid, not explicitly static or dynamic. His explanation is very interesting, because it takes the argument from being black vs. white and turns it into many gray shades. Other languages have explored these concepts before, with ideas like gradual typing, which was born out of the idea of grafting a type system onto dynamic languages, not making static languages more expressive. But what exactly is expressiveness? As this StackOverflow post explains (always cite your StackOverflow posts kids): "Expressive" means that it's easy to write code that's easy to understand, both for the compiler and for a human reader. Two factors that make for expressiveness: • Intuitively readable constructs • Lack of boilerplate code Peter Norvig has a great talk on design patterns in programming languages. One slide stuck out to me as I was reading it recently. Let's break that down: There are fewer design patterns in expressive languages, because the type system does not prevent programmers from trying to express a concept. Dynamic languages, by the very nature of their weak type systems, have less trouble being expressive. This does not rule out static languages from being expressive! The lack of expressiveness of static languages is dogma carried over from other static languages that came before. I'd argue that Go is as expressive as Python, and Swift, even in its incomplete state, is nearly as expressive as many dynamic languages. You can recreate the advantages Objective-C offers through its dynamic nature by using different expressive techniques, like protocols and generics, in a statically typed language. One more thing: Many arguments imply that Apple hasn't thought about writing apps, that they built a static language and forgot to take app development into account. Care to tell me which company writes apps on the most iPhones in the world? That's right, Apple. I don't think they're stupid enough to create a language which they believe is objectively worse for writing apps. Regardless of how this whole static vs. dynamic "conversation" turns out, one thing's for certain: I'm #TeamSwift.
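To make that last point concrete, here's a minimal sketch (my own example, not from the original post) of a protocol and a generic function standing in for the respondsToSelector: style duck typing you'd reach for in Objective-C:

// The capability is declared once, and the compiler checks it at build time.
protocol Refreshable {
    func refresh()
}

struct FeedViewModel: Refreshable {
    func refresh() { print("Reloading the feed") }
}

struct ProfileViewModel: Refreshable {
    func refresh() { print("Reloading the profile") }
}

// Generic over anything Refreshable: no casting, no runtime checks, still flexible.
func refreshAll<T: Refreshable>(_ items: [T]) {
    items.forEach { $0.refresh() }
}

refreshAll([FeedViewModel(), FeedViewModel()])
refreshAll([ProfileViewModel()])

The flexibility is still there, but the "does this thing respond?" question is answered by the type system instead of at runtime.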
You Can't Do It All
At the original iPhone announcement, we saw Steve Jobs on stage with Googleâs then CEO Eric Schmidt, showing off Googleâs amazing Maps. Built for the iPhone, it was something weâd never seen before. Appleâs incredible phone and revolutionary software combined with Googleâs terrific web services and data coming together for one amazing product. With regards to collaboration, itâs all been downhill from there. Since then, every tech company has focused on owning the whole experience.
Apple, Google, Amazon, Microsoft, and Facebook. These companies all excel at some things, and have attempted to leverage that into more. Apple understands user experience and hardware. Google gets web and machine learning like no other company. Amazon is the best at logistics and commerce. Microsoftâs productivity and enterprise know-how guides them to success. Facebook has little competition when it comes to figuring out consumer behavior.
In the mobile era, each of those companies has tried to make the widget, sell it, and reap all of its rewards. But this has never worked.
At the original iPhone announcement, we saw Steve Jobs on stage with Googleâs then CEO Eric Schmidt, showing off Googleâs amazing Maps. Built for the iPhone, it was something weâd never seen before. Appleâs incredible phone and revolutionary software combined with Googleâs terrific web services and data coming together for one amazing product. With regards to collaboration, itâs all been downhill from there. Since then, every tech company has focused on owning the whole experience. Apple, Google, Amazon, Microsoft, and Facebook. These companies all excel at some things, and have attempted to leverage that into more. Apple understands user experience and hardware. Google gets web and machine learning like no other company. Amazon is the best at logistics and commerce. Microsoftâs productivity and enterprise know-how guides them to success. Facebook has little competition when it comes to figuring out consumer behavior. In the mobile era, each of those companies has tried to make the widget, sell it, and reap all of its rewards. But this has never worked. Amazon bought a mobile mapping company. Apple has tried to copy Microsoft Office. Google has made not one but two OSâs, a social network, and probably eight messaging apps, Iâve honestly lost count. And the list goes on. The Roman empire fell because it was too large to maintain (sure, there are other reasons too⊠but letâs move on). No company can be the best at everything, and the quicker some companies realize that, the more handsomely they will be rewarded with opportunities to partner with others. In programming, we have the concept of the Unix philosophy. Itâs the idea that you build a large and complex program by combining many single-task apps that do one thing, and do it well. Unfortunately that runs in contrast to what weâve seen in the tech world, because thatâs not what the landscape encourages. The Unix philosophy is as close as weâve come to a successful implementation of distributism, and thereâs no way thatâs happening. Weâve seen it work with things like federated messaging and interoperating protocols, but none have lasted long enough before a company tries to create an integrated experience around open standards. Itâs hard for one company to excel at user experience, hardware, machine learning, web services, enterprise, social, and more, when each of those has different incentives, customers, and end users. If thereâs anything that is Appleâs (or anyoneâs) ultimate demise, itâll be spreading itself so thin across what the company does, that they wonât be able to fight the war on all fronts. As the saying goes, âthe enemy of my enemy is my friend.â In the past weâve seen companies partner together to take on one competitor. Itâs not as black and white as Google hates Apple, and Apple hates Google. It canât be when Google is paying a billion dollars to Apple every year to be the default search engine, and when iOS is more profitable to them than Android. Itâs more like Apple uses Google when itâs opportune, and Google uses Apple when itâs in their best interest. Politics make strange bedfellows. The only reason Iâve become a bit bearish (just a bit) on Apple is that theyâve yet to prove to me that they can own the user experience and have the expertise necessary to excel in all the domains theyâre entering. But Iâm a man who loves to be proven wrong, and theyâre a company whose proven doubters wrong many times over.
Comparing Shyp to Time Warner
I pushed the magic button to get a Shyp person here to send out a couple packages I have been procrastinating sending for weeks. After that was settled, I dialed up Time Warner Cable, to get my modem swapped out, a process I've been actively trying to get done for weeks.
After a combination of pushing buttons and yelling "tech support" into the phone for 10 minutes to get me to the right place, I finally got a real life human on.
I pushed the magic button to get a Shyp person here to send out a couple packages I have been procrastinating sending for weeks. After that was settled, I dialed up Time Warner Cable, to get my modem swapped out, a process I've been actively trying to get done for weeks. After a combination of pushing buttons and yelling "tech support" into the phone for 10 minutes to get me to the right place, I finally got a real life human on. I won't bore you with the details, but halfway through my call I got a call from the Shyp guy to tell me he was downstairs, and wanted to come up. I told him sorry, I was on the phone with Time Warner, but he didn't seem to mind, and 60 seconds later I had given him my stuff and had gotten back my tracking number. Another 10 minutes and two holds later I was off the phone with Time Warner. Guess which company I liked dealing with more?
The Apple Watch and Luxury
Just leaving this here for later, feel free to call me out on it if I'm wrong.
The Apple Watch isn't about Apple selling luxury products, it's about making something nice looking enough that you'll actually wear it.
Just leaving this here for later, feel free to call me out on it if I'm wrong. The Apple Watch isn't about Apple selling luxury products, it's about making something nice looking enough that you'll actually wear it.
On Slow Programming
This has been a recurring theme for me in 2014. https://t.co/H613AEUvwj
— Joe Fabisevich (@mergesort) December 1, 2014
People have asked how I make changes so quickly to my code, it's because I've made it so it can be changed quickly.
— Joe Fabisevich (@mergesort) December 1, 2014
Build what you have to build, then make it more generic so you can use it again without the code gaining entropy.
— Joe Fabisevich (@mergesort) December 1, 2014
As a bad math student, the best advice I got was to solve for one scenario, solve it for another, and only then try to connect the dots.
— Joe Fabisevich (@mergesort) December 1, 2014
This has been a recurring theme for me in 2014. https://t.co/H613AEUvwj — Joe Fabisevich (@mergesort) December 1, 2014 People have asked how I make changes so quickly to my code, it's because I've made it so it can be changed quickly. — Joe Fabisevich (@mergesort) December 1, 2014 Build what you have to build, then make it more generic so you can use it again without the code gaining entropy. — Joe Fabisevich (@mergesort) December 1, 2014 As a bad math student, the best advice I got was to solve for one scenario, solve it for another, and only then try to connect the dots. — Joe Fabisevich (@mergesort) December 1, 2014
Unbundling, DuckDuckGo, and Native Advertising
I tweeted earlier, comparing DuckDuckGo to Google when searching for the term âGo 1.4 betaâ, and how the first 50 results (I got bored scrolling and didnât go further, no pun intended) on DuckDuckGo didnât even have one mention of the language. Gabriel Weinberg being the good founder that he is (he seems very smart, and I highly respect what heâs doing) replied asking for more examples of things queries that he can investigate to improve, so I figured Iâd use the opportunity to leverage my thoughts, instead of the discussing the symptoms.
I tweeted earlier, comparing DuckDuckGo to Google when searching for the term âGo 1.4 betaâ, and how the first 50 results (I got bored scrolling and didnât go further, no pun intended) on DuckDuckGo didnât even have one mention of the language. Gabriel Weinberg being the good founder that he is (he seems very smart, and I highly respect what heâs doing) replied asking for more examples of things queries that he can investigate to improve, so I figured Iâd use the opportunity to leverage my thoughts, instead of the discussing the symptoms. DuckDuckGo performs better with general queries, because they have less personalized meaning. Thatâs not to say that it performs better than Google, but better relative to itself on user-specific queries. Over-generalizing, when a person is searching, they either want the general overview (e.g. a Wikipedia article, some solution to a problem theyâre having, etc), or are looking for information about the query, in a manner specific to them. When they look for something specific, searches have an implied meaning to the user who is searching. Intuitively Google performs better, garnering more accurate results inferring meaning from the rich history they have of you, through their various products. The main issue I have is that DuckDuckGo seems poor at inferring implied meaning with their lexical parsing of the terms. As an experiment, if you try in a fresh browser, not logged in, in private mode, Google will still hands down return better search results. But letâs imagine a world though where Google is just another search engine, the best at searching general results, but not great at anything specific. That world is here now. I propose that the answer to Googleâs search dominance isnât building a better Google, itâs unbundling via specialized search engines. Pinterest, Stack Overflow, and others are better equipped to understand the meaning behind your searches, based on their knowledge of you, but more importantly the rest of their ecosystem. The shift from desktop to mobile has amplified this behavior. A general search engine is not always available. Itâs been interesting to watch peopleâs usage patterns change. They think of places like Tumblr as a search engine for creativity and exploration, something that Google is not. Twitter is the place to go if you need to know about something happening right now. Amazon is where you go to fill your consumer needs. Nowadays, for any of those, you can open an app, search, browse and discover, as opposed to having Google lead you to your content. When youâre searching for how to decorate a home on Pinterest, they can quickly understand that the life event that has or will occur, or what youâre aspiring to. This is a prime opportunity to start marketing new blinds, cleaning supplies, whatever can help you in your journey. A lot of this has been brewing in my head for a very long time, but recently I've started to feel like thereâs more at play than just search, a vision of how weâll use the internet soon. Not a world where ads arenât annoying banners, but actually useful content. Currently you arenât seeing that ad for new blinds because ads treat you like youâre in the wrong place at the wrong time, with robotic reasons for what they choose to display. Itâs rather difficult to capture your intent and translate it into a sale unless youâre explicitly looking to buy something (which is a place where Google excels). 
Leveraging that knowledge is the dream of direct ad marketers and brand advertisers, and could actually provide you with benefit in your life. tldr; Search is hard, really hard. A general search engine is the best answer we have for the internet, for now. Ads are annoying, but they donât have to be. Thereâs so much more to be written, this is only the tip of the iceberg. As always, Iâd love to discuss.
Sapphire
I'm starting to think that the ramp up in sapphire production from Apple isn't about a screen, but TouchID sensors on every Apple product…
I'm starting to think that the ramp up in sapphire production from Apple isn't about a screen, but TouchID sensors on every Apple product. Even their future wearable/payment/unicorn, whatever it may end up being.
The iPad probably isn't dying any time soon, probably
The internet decided last week, the iPad is dying. Too bad, I really enjoyed using mine. Well, not really for the first 4 years, but when Apple released the iPad Mini, it became my computing partner. Supposedly the Mac is sitting in the corner rapping "don't call it a comeback."
It depends on what you want out of a computer, and yes, I'm calling the iPad a computer.
The internet decided last week, the iPad is dying. Too bad, I really enjoyed using mine. Well, not really for the first 4 years, but when Apple released the iPad Mini, it became my computing partner. Supposedly the Mac is sitting in the corner rapping "don't call it a comeback." It depends on what you want out of a computer, and yes, I'm calling the iPad a computer. When I'm at home and not working (the rare moment), sometimes I want to read some tweets on the couch, write a Tumblr post in bed, and listen to some music while I shave. At this, the iPad excels. People seem to be forgetting that while the software of iOS on the iPad is underpowered compared to a Mac, it's also a whole lot more enjoyable to use. (Not to say it can't use some more power.) I suspect a lot of people are like that, and Apple knows that. There are a lot more of them than there are of the kind of person who rotates between Sublime, Xcode, and the terminal while trying to launch a startup. And then once in a while, I get to take out my iPad and slow it down a bit. The best computer is the one you have with you, and while the iPhone is great, there's nothing to say that the iPad can't replace a Mac or PC for most consumers, and that is why I call it a computer.
A bigger iPhone
With rumors swirling about a bigger iPhone 6, I figured I'd touch on them (haha, puns). A few years ago, I thought the idea of a larger iPhone would be something I disliked. With my small-ish hands, even the iPhone 5's 4 inch screen is too large for me to reach the back button in the top left corner of the screen with one hand.
But now, since iOS 7, I don't fear a bigger phone, I would even welcome one.
With rumors swirling about a bigger iPhone 6, I figured I'd touch on them (haha, puns). A few years ago, I thought the idea of a larger iPhone would be something I disliked. With my small-ish hands, even the iPhone 5's 4 inch screen is too large for me to reach the back button in the top left corner of the screen with one hand. But now, since iOS 7, I don't fear a bigger phone, I would even welcome one. iOS 7 is very gesture driven, and has added edge gestures, which allow a user to go back a screen by swiping in from off the screen, on the left side of the phone. These are accessible to developers with one line of code. I really love using my iPad mini, but do wish that something I carry around with me all the time would be a little more pocketable.
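The navigation controller's back swipe comes essentially for free; if you want your own edge gesture, a minimal sketch (my own example, written in Swift, which didn't exist when this post was published; the view controller is hypothetical) looks like this:

import UIKit

final class ArticleViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Swiping in from the left edge mimics the system back gesture.
        let edgeSwipe = UIScreenEdgePanGestureRecognizer(target: self,
                                                         action: #selector(handleEdgeSwipe))
        edgeSwipe.edges = .left
        view.addGestureRecognizer(edgeSwipe)
    }

    @objc private func handleEdgeSwipe(_ recognizer: UIScreenEdgePanGestureRecognizer) {
        if recognizer.state == .ended {
            navigationController?.popViewController(animated: true)
        }
    }
}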
Go for Objective-C developers
I've been doing Objective-C for almost 5 years (woo!), so at this point I think I have a better understanding than most of Apple's motivations and intentions, with relation to building the language.
That said, recently I've been loving working with Go, and there are a few reasons for that.
I've been doing Objective-C for almost 5 years (woo!), so at this point I think I have a better understanding than most of Apple's motivations and intentions, with relation to building the language. That said, recently I've been loving working with Go, and there are a few reasons for that.

Not traditionally object-oriented

With the rise of ReactiveCocoa, I've been thinking about what programming principles might work for UI-driven frameworks. Go is not traditionally object-oriented. You cannot inherit your Cat class from Animal, but you can anonymously embed an Animal into your Cat, so it gets all the traits of Animal. That's because you don't have objects, you have structs and interfaces. Interfaces are sets of methods that structs can satisfy. This doesn't sound quite that different from OO methodologies, but it's a big distinction when thinking about how to construct your software. Gothic (go-like) programming seems like it would be a great style for people looking to explore signal-driven frameworks, which Go is great for.

Type inference

UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 480)];

I write that dozens of times per day when doing iOS development. In Go, it would look like this:

view := UIView{CGRect{0, 0, 320, 480}}

Considering how often I write that, I would love type inference to clean up my code. Type inference is the biggest reason writing in Go feels like working with a scripting language.

Garbage collection

Let me start off by saying, ARC is amazing. I think what Apple's done with LLVM, and what it's enabled, is one of the best things I've seen in my short career. That said, having to worry about ARC not cleaning up properly, or where to use a strong vs. weak reference, does get tedious. If software development is about reducing mental strain on a programmer, then garbage collection is something that goes a long way to that.

Native concurrency

Go handles concurrency in a few ways. The simplest is to just stick the go keyword in front of a function call, and it will run asynchronously.

doSomethingAwesome()    // Runs synchronously
go doSomethingAwesome() // Runs asynchronously

The second is channels. As an Objective-C developer you can think of channels similarly to NSNotifications. You pass values to a certain channel, and it responds accordingly, as you've set it up to respond. One nice thing is that unlike NSNotification it's statically typed, because this mechanism is built into the Go language. Channels also can talk in both directions, so you can pass messages back and forth along a channel.

package main

import (
    "fmt"
    "time"
)

func main() {
    ok := make(chan bool, 1)
    go doSomething(ok)
    fmt.Println("Waiting")
    <-ok
    fmt.Println("Did something")
}

func doSomething(ok chan bool) {
    time.Sleep(time.Second)
    ok <- true
}

I don't know about you, but I'd much rather be doing concurrency this way rather than thinking about what thread to run a function on.

Packages

One thing that Objective-C has struggled with for 30 years is namespaces. JFMyClass, AFNYourClass, THOSomeOtherClass! All this prefixing is done to avoid collisions. The accepted practice is now to prefix your classes with 3 letters, because that will solve everything, obviously. If my implementation of a class has a method doSomething, and yours does as well, with Objective-C's dynamic runtime there is no way to know whether your version will be run or mine will. Go solves that in the classic way, with packages.
Packages can be built into static libraries, which get put into your go directory (where all libraries are stored on your computer).

Go as a tool

Go has built terrific tooling into the language's standard offerings.

go get

go get fetches remote repositories and installs the packages and dependencies. In fact, you can even import from a URL, like import "github.com/go-sql-driver/mysql", and have your MySQL driver ready to go when you compile your application.

go fmt

Isn't it awful when you use BSD style brackets, and your coworker uses Allman, and you want to use K&R? Go only has one style. You run the go fmt tool, and it automagically converts all the brackets, new lines, and everything else to one standard Go format. Most IDEs have built-in support which runs the go fmt tool when you save the file, so your code always looks right.

go test

Tests are built into the language. Tests live in the same package, so you don't have to worry about exposing any variables just for your test cases' sake.

go be happy!

This one is just my personal advice. Go isn't the world's most perfect language, but it's one of the biggest advancements for software development principles in a while. That's why I'm excited about it, and I'd implore you all to try Go! As usual, if you have any feedback feel free to leave a comment for me @mergesort or on this Hacker News thread.
Come See Me
It's short notice, but I'm giving a talk at Stony Brook University tonight about How Startups Fail. So if you find yourself in the middle of…
It's short notice, but I'm giving a talk at Stony Brook University tonight about How Startups Fail. So if you find yourself in the middle of Suffolk County, come watch!
Writing Go Code
All day every day.
All day every day.

if err != nil {
}
if err != nil {
}
if err != nil {
}
if err != nil {
}
if err != nil {
}
The Company I'm Watching in 2014
There's one company I've got my eye on in this new year. I don't necessarily expect them to succeed or fail, but do think that this will be a pivotal year in their history.
There's one company I've got my eye on in this new year. I don't necessarily expect them to succeed or fail, but do think that this will be a pivotal year in their history. Microsoft 2014 is shaping up to be a make or break year for Redmondâs finest. The tent poles of the company being attacked on all fronts, like the Roman Empire. From the low end, from the high end, in casual spheres, and business, Microsoft is on high alert. But have they sunk too low to be picked back up? To answer that we have to look at what Microsoft has to offer. CEO It all starts at the top. Microsoft will look to replace itâs âcharismaticâ leader in 2014. They will need to find their CEO, and the direction. From there only can they decide where to focus their efforts. This may end up being the biggest decision in their companyâs history, not having one of their founders be at the helm of Microsoft. Windows Phone Windows Phone is seemingly showing some life recently. I know, not hard to rise 156% from next to nothing, but itâs something. Windows Phone is probably the most interesting product at Microsoft. So interesting there are talks about using itâs platform in other products. Windows Phone is a clean break from what Microsoft has traditionally done. A consumer focused, limited computing experience for getting in and out. Theyâve had a lot of trouble getting traction with developers, but it still is worth commending. Hopefully they can take lessons learned from here and apply them elsewhere. Tablets The Surface was undoubtedly a flop. Maybe the Surface 2 is relief, but I donât think itâs likely. The Surface 2 is the new desktop in a mobile world, not a tablet. Microsoftâs ethos of computers enhancing your work, not about enhancing your life shines through in the Surface. Take your tablet everywhere you go, and when you get to work you can dock it with a mouse, keyboard, and monitor. So far consumers have voted with their wallets and attention to say this is not how they want to behave in a mobile computing world, but time will tell. Desktop Whatâs a desktop? Oh, you mean the iMac thingy that Apple sells, and every other company loses money on. Well at least some people still have fun building them. Xbox Itâs scary to think that Xbox might have lost the console war out the box (pun intended), but they seem to have done well recovering. Xbox is an interesting place, because it along with Bing is where a lot of R&D happens within Microsoft. Office Uh oh, everyoneâs building an Office competitor and just giving it away. The long entrenched Google Docs still works. It might be frustrating sometimes, but it offers a world class collaboration tool. Now an Apple ID, gives you access to Pages, Keynote, and Numbers in the cloud (though I think any savvy consumer would be weary of trusting Apple with a web service). An iOS device you get it for free on the iPhone and iPad. It comes free in OS X if you purchase any Mac. To use their language, itâs going to be a tough value proposition for consumers to say to Microsoft, âgive me good money for the Office suiteâ, when good (not Open Office, ok?) alternatives exist. Even if competitors donât match up feature for feature with Office, they get you 80% of the way there for 0% of the cost. Windows Turns out, 1 paragraph later itâs still really hard to compete with free. Apple is just giving away itâs operating systems like theyâre Oprah). 
It's a boon to developers who get to make things with the latest technologies, Apple who gets to keep the platform moving forward, and consumers who always have the latest and greatest. Google updates Chrome OS behind the scenes, protecting users against vulnerabilities, and giving them access to new technologies. I'd imagine they would do anything they can to get Android to follow that model. As for Microsoft, it's hard to keep the ship afloat when you don't know where you're rowing. Are we making a desktop OS, are we making a tablet OS, a phone OS? Conclusion Look for a lot to happen in Redmond in a very short time. If they don't do a lot, they're doomed as they'll remain stagnant. If they cut too much, they're probably destroying the foundation of what makes them Microsoft, and may lose a lot of support. Saving Microsoft (I'll go as far as to say they need saving) will need to be a surgical operation in a land where competitors operate so nimbly. Speculating on what will happen seems like a fool's errand, so I'll take a pass on that. Pass the popcorn, let's sit back and watch.
Five Reasons to be and not be a developer in New York in 2014
The Good 1. You can't just get funding for any old idea.
Being the financial capital of the world means that people are wary of giving money to stupid ideas. Ok, ok, less wary, but it still happens. But New York is very grounded with respect to technology, and that gives me [some] confidence in the ideas that are being funded here.
The Good 1. You can't just get funding for any old idea. Being the financial capital of the world means that people are wary of giving money to stupid ideas. Ok, ok, less wary, but it still happens. But New York is very grounded with respect to technology, and that gives me [some] confidence in the ideas that are being funded here. 2. It's New York City. It's really hard to beat New York City, if urban life is your thing. For me, it's really hard to beat. There's a neighborhood for every kind of personality. I've lived in Queens, the Lower East Side and Upper East Side. I've worked in the Financial District, Flatiron, and Meatpacking. Heck, if Brooklyn is your kind of scene, the startup community is pretty big over in Dumbo. Cough, cough, Etsy. 3. It's easy to find work. There's a shortage of developers. No question about it. Every company I've been at, and many I've been around, have gone on about how difficult it is to find developers, especially in mobile. Really hard even, so much so that people move their companies out to SF to find them. 4. Winter. Snow. It's really pretty. I've heard people out in the bay say they miss seasons, and I can say there's nothing like watching the leaves change color down the streets you walk every day. If you're a fan of seasons, New York has them, sometimes even three or four in one day. 5. It's a quickly growing community. Over the last few years, I've watched the tech community grow from the same few people at most meetups, to hundreds of meetups with just a few familiar faces at each. The Bad 1. You can't just get funding for any old idea. If you're an entrepreneur who wants to take a shot in the dark on some wild idea, it's probably going to be harder to raise money early on. 2. It's New York City. You might not like the city. Or you might like SF more. Or you might just hate gentrification, Bill de Blasio (it's ok, some New Yorkers do too), and something else that makes New York, New York. I can respect that, it's not an easy city to live in if it's not the kind of lifestyle you're looking for. 3. It's easy to find work. The corollary to this is that getting people to stick around will be harder. Developer mobility is high because they know they're in demand. In SF, where there is a lot of supply, it's not as big a problem, but in NYC, I've seen replacing a developer (or worse, developers) basically shut down companies. 4. Winter. It's 20 degrees as I write this, and I don't want to leave my house… Ever. Edit: it's now 9 degrees, end my misery… 5. It's still growing. Sometimes you'll find people who just want to be in there because it's the hot thing. Sometimes you want a kindred spirit, someone who understands when to use a b-tree, and when to use a map, not just a pixel pusher (as an ex-coworker once angrily described GUI development).
New Years Resolution - 2014 Edition
Last year I spent a lot of time gathering pieces, and in 2014 I'd like to put them together.
Last year I spent a lot of time gathering pieces, and in 2014 I'd like to put them together. More server-side programming I've been doing iOS for almost 5 years now. I always want to work on the whole stack, but I've never found myself drawn to a language that's useful on the server. Recently I've fallen in love with the power and design of Go. Take what Ruby offers you, with web frameworks, a driven community, and the clarity of Python.1 Now add static typing, with Robert Griesemer, Rob Pike, and Ken Thompson behind it, and you have Go. More design In 2013, I had the pleasure of working with a few amazing designers. Luckily one of them let me pester him relentlessly with every inane question and stupid idea I had. In 2014, the fruits of his labor should come to bear in my work. More hats to wear I don't mean this in the classic startup way, though that's the best part of working at startups. Over the last year I've come to face it: I'm going bald. It's pretty noticeable, my hairline is receding like glaciers in the North Pole. I decided to buzz it all off one day in November, and haven't looked back since. Of course it was 25 degrees the next night, so I did regret the timing. But back to hats… I've always avoided wearing hats because I knew my time with thick hair wasn't long for this world, and wanted to squeeze what I could out of it.2 What I didn't expect was the white hairs in my beard… But it's ok, Natalia thinks it's endearing. Do good, not just well More on this one later, but I can't wait to show you what I have in store for 2014. Here's to a good 2014! 1. If you want a commitment to clarity, just look no further than the go format tool. 2. Wearing hats supposedly prevents oxygen from getting to the scalp, weakening hair follicles.
New Years Resolution Advice
Eat as much as you can tonight so you can make your New Years resolution to lose weight easier!
Eat as much as you can tonight so you can make your New Years resolution to lose weight easier!
Evomail's Privacy Policy
As I switched over from Gmail to Fastmail.fm, I was looking for a mail client to replace my beloved Mailbox on iOS. I would have loved to continued using it, but it only supports Gmail, and not regular IMAP, so I needed a new client. Since I now have come to rely on the snooze and reminder features that Mailbox offers, I wanted to find an app that best matched that experience.
My reasons for switching from Gmail to Fastmail are similar to most, the whole privacy/advertisement debate that most in the tech industry pretend to care about most of the time, and some few paranoid folks like me actually think of. As a result, before deciding on which mail app to switch over to, I carefully read through the privacy policies of each application. I understood that in all likelihood my data would now be stored on someone other than Fastmailâs servers, and wanted to see if anyone had any alternatives.
I came to one that struck me as reasonable enough to trust, and that was Evomail. From their terms of use, I found two sections titled Caching and Deleting Data.
As I switched over from Gmail to Fastmail.fm, I was looking for a mail client to replace my beloved Mailbox on iOS. I would have loved to continued using it, but it only supports Gmail, and not regular IMAP, so I needed a new client. Since I now have come to rely on the snooze and reminder features that Mailbox offers, I wanted to find an app that best matched that experience. My reasons for switching from Gmail to Fastmail are similar to most, the whole privacy/advertisement debate that most in the tech industry pretend to care about most of the time, and some few paranoid folks like me actually think of. As a result, before deciding on which mail app to switch over to, I carefully read through the privacy policies of each application. I understood that in all likelihood my data would now be stored on someone other than Fastmailâs servers, and wanted to see if anyone had any alternatives. I came to one that struck me as reasonable enough to trust, and that was Evomail. From their terms of use, I found two sections titled Caching and Deleting Data. Caching: We do store metadata and on occasion full encrypted contents of your emails on our servers. This isnât permanent storage, but rather cached contents to deliver a better user experience to you, our customer. Deleting Data: We do delete your account and all cached email contents from our servers when you delete your account in Evomail. Unfortunately, it seems they are not true to their word. Like most modern mail clients, Evomail offers push notifications when you receive a new email. I set up multiple email accounts on the device. A few days later, after getting many happy pushes (ok, itâs email, I hated them) I went over to the Settings application and found Evomail, and a toggle called Reset Local Database. I flipped it, went back to the app, and saw that all my accounts had been reset. Too lazy to enter my credentials in again, I used iOSâs default mail app for a few hours. But I noticed I was still getting push notifications from Evomail, telling me that I had new emails. If they delete my account and all cached contents from their servers, how was I still getting push notifications? Iâm willing to hear them out before striking down my proverbial gavel, but Iâm not really sure what technical reason they could give for this.
The Fast Food Strike
If you can't pay people a reasonable wage and stay open, you should reassess your business.
End of a Chair-a
Sophomore year of college, my friend Mike has an idea to build new chairs. Not just any seats though: take the chairs out of a 1985 Pontiac Fiero and make regular desk chairs out of them. Being sophomoric, I have no reasonable objection, so he starts scouring Craigslist. We find a guy in Jersey selling two Fiero seats and decide to drive out there.

We pull up to his house. I should say shack. It was like Deliverance, but in rural Jersey. It didn't matter. He greeted us, got his $65, and helped us load them in the back of a Honda Accord (no simple feat). They were covered in velour. They had speakers in the headrest. They were badass.

We got back to the city and went to Office Depot. We found the cheapest wheeled chairs we could find and threw out the top half. We found planks of wood and bolted them onto the bottom of the Fiero chairs. We took the wheeled bottoms and bolted them onto the piece of wood.

Flash forward to today. I'm throwing out this "wonderful" contraption. After the sharp edges on the wood have cut my foot numerous times. After the lever has busted, making the chair sink every time it's sat on. After it clogging up my room and serving as a good nightstand, because god help you if you sat on it.

Good luck in your next life, Fiero chair. You were great at being just not useless enough for me to not throw you out for 4 years.
Not Done
We're not done, but the new Done Not Done, coded by yours truly, is up in the App Store. An app to keep track of the movies you've seen, music you've listened to, and books you've read, and for all the things you want to do.

It's getting some much needed performance love before we resubmit (the whole thing went from nothing to done in 6 weeks). More coming soon, but go and get it.

Editor's note: Betaworks shut down the app 2 weeks after we released it. I guess it actually was done!
"Friends"
Update: I quit using Facebook altogether about a year after this post was written. I really enjoyed using Facebook as described below, and the product was actually quite pleasant, but it didn't provide me with enough value to overcome the moral issues associated with the company. I also wrote about how I stay in touch with friends, and how it's served me better than Facebook.

My primary motivation for unfriending 360 Facebook friends was pretty simple; I just didn't like being on Facebook anymore. I posted three times in the last year. When I released Unmentionables, I wanted to take advantage of social. When I wanted to put something on Bondsy for my entire network to see, more social pressure. And of course, to put up a picture of me wearing a sombrero. I didn't care about the day to day particulars from most of the people I was friends with, and every time I went on it made me feel apathetic towards Facebook. It's silly for me to be paying for the mistakes that a 16-year-old version of me made.

It's been about 3 weeks, and so far, I like Facebook again.

I've always said that Facebook is the world's best rolodex. I can always contact everyone I've met and cared to friend. But what if my attitude changed to: I can always contact everyone I've met and think I would want to contact? So many people have taken different courses in their lives since I've had relationships with them, and I'm not particularly interested in the people that they are now. That's not to say I don't wish the best for them, of course, but I have friends from high school who now put up pictures of their kids. I have friends from elementary school who I literally (actually literally) haven't spoken to for two thirds of my life (and that gap will only get larger). I have friends who I met at a party once, who were a friend of a friend, and I really know nothing about them, but there they are in my feed. Why should I even bother? There's only so much room in my head.

Facebook is also a great event planning tool. Go through the list of your friends, pick a place, pick a time, and you're done. Everyone's on the network, so you have the entire selection of your friends. Facebook has a list of basically every place, and if not, just put it in the details of the event. Facebook will even be so nice as to give you a weather forecast for the time of the event. But again, I'm not inviting people who I haven't spoken to in years to my birthday party. Besides that, I don't feel much attachment to what Facebook has to offer.

So how did the unfriending go? I'm down to 85 friends. The first run of unfriending was going through my list and just hitting the unfriend button. I was very critical, basically saying, "if I haven't thought about you in the last 6 months, I'm just going to unfriend you." I wrote down the name of everyone I unfriended into a text file, in case I ever did have the need to message them. I got the majority of folks out of my feed that way. I then ran through the list a few more times, making sure I didn't miss some people, including Facebookers who I may have been a bit too lenient with the first time around. Lastly, whenever I went on Facebook and saw someone who was still in my feed that I should have unfriended, I didn't second guess it, and just did it.

So how do I like it? I've long been a believer that Twitter is great because your feed is a reflection of who you've chosen to follow. In the end, there isn't a reason why Facebook shouldn't be the same way. The only thing stopping this is the social pressure that's created by the two-way relationship that a Facebook friendship is. The number, 85 friends, doesn't seem like a real coincidence to me. I didn't go into this with a number in mind, but my guess is that it's somewhat related to Dunbar's Number. I just don't have room in my head for all the people I had friended on Facebook before.

I've turned off push notifications for everything except messages. I added email notifications for new events to make sure I don't miss that activity entirely. I just don't feel like I need to know about everything going on in my circle every minute. My feed feels more relevant. I don't feel overwhelmed. I actually look forward to going on sparingly for a few minutes a day.

Overall, I'd recommend it. Even if you just intend to prune a little bit, and not unfriend en masse, your Facebook feed will feel more personal.
A Long Overdue Postmortem
Last night I pulled the first application that I ever wrote from the App Store.

Craig Glaser and I (mostly Craig) came up with the idea of creating heat maps for players in MLB. We thought it would be a cool visualization, and were convinced we could sell thousands of copies and be App Store rich.

I took to writing it. I took to rewriting it. I took to rewriting what I rewrote, only to discover, hm, I'm not a very good programmer. Objective-C being so foreign didn't help... But in the end, it got done.

We didn't quite sell thousands, but we definitely made enough to recoup our money for the App Store fees, though not enough to recoup the hundreds of hours I had put into it. I didn't care though: I had an app in the App Store, and Craig had some rep in the sabermetrics community which he is now helping push forward.

Then I rewrote it again, a new version, with a new UI. This was my first lesson in redoing a project completely. What took me 2-3 months to write the first time took me 2-3 weeks this time. I had a good base; I added networking, a database instead of flat files, daily updates which taught me Python, and a new design that was more in line with what iOS apps were now doing. It was originally all standard controls, but I added gradients and textures (which iOS 7 is now banishing).

Then it just sat around, sold some copies, and I didn't do anything. We were then approached by one of the bigger sabermetric sites about doing a partnership, where they would own the branding of the app, and we would share revenue with them for the exposure. That didn't go anywhere past an initial phone call and a few emails. The app just sat around some more, for another 6 months or so, and here we are now.

Batting Goggles is out of date stylistically and statistically. There's no real reason to keep it up, nor the desire to. In fact, when iOS 7 comes out, it's not only going to look bad by iOS 6 standards, but it's going to look plain wrong and not fit in with the OS at all. It's just not worth the couple of bucks a week that it gives me to tarnish my portfolio, since I don't plan on updating it any time soon.
Jarring
Based on beta 1 of iOS 7 (so take it with a grain of salt): from skeuomorphic to schizophrenic.

The lack of visual distinction between a button and a label is confusing so far. So is how thin the fonts are, making them hard to read. I am surprised they didn't go with Avenir as their system font. I can't wait to recompile all my apps just to get the status bar to work. Last but not least, the semi-translucent apps look very strange. The iPhone's background actually affects how your app looks and feels.

That said, jarring isn't necessarily bad. I'm looking forward to seeing what designers do. I don't think standard UIKit was ever really good looking, and this isn't either, but it's on the other end of the spectrum. Overall, as I said, it's hard to judge off beta 1, but I would definitely use the word jarring to describe it.
WWDC 2013 Prediction
I'm only going to make one prediction for WWDC this year, and it's one that probably won't be verifiable for a bit of time. With that in mind, I'll just leave this here.

I think iOS 7 is the OS that we'll be able to look back on and say this is where it started to grow up. I think there will be multitasking changes, and inter-app communication, and settable default apps. This is in preparation for a day where we see the iPad take on bigger form factors and be more usable. I don't think that OS X has a future where it merges with iOS, but I can see a 13" iPad on your desk. iOS is not currently fit to really do that, but with some attention it can get to the point where it does (almost?) everything that a Mac can do, but simpler.
4 Years Ago
Apparently 4 years ago today I decided to try my hand at Objective-C.

"Gonna work on an iPhone app, hopefully I can master the fine ways of Objective C." – Joe Fabisevich (@mergesort), April 29, 2009
Aaron Swartz's Raw Thoughts
I spent a good amount of time over the weekend reading Aaron Swartz's blog, shortly after his untimely passing. In short, I never knew him, so I don't want to rehash everything that's been said, but judging by what I've read, he was a brilliant person. So I leave you with a few blog entries that I found particularly insightful, interesting, and helpful. Some are longer, some are shorter, but I'd recommend giving them all a read.

- On how to be more productive
- On learning (and failure)
- On personal objectivity
- On the news
- On depression
- On pushing through pain

The first and second are the most powerful, and the last one's the hardest, but each of them has its own merit. Hope these do something for someone.
Wunderlist is wunderful. Could be wunderfuler.
It's not their fault.

tl;dr: It sucks that iOS apps can't update in the background, but I get why Apple does this. No biggie, it would just make a lot of apps better, but at the cost of thinking about battery consumption. P.S. You can't read 200 words, really?

This week, a product that I hold close to my heart, Wunderlist, had a major update. It's totally revamped, with bugs of the past smoothed out. I want every Reminders-like app to just behave like Wunderlist. It's cross platform, syncs for free, is intuitive to use, and is beautifully crafted. But it can never run a daemon on Apple-made devices and exist in the App Store.

The fact that Apple controls iOS so tightly means that Reminders, their own app, can get updates from other devices via iCloud when you're not in the app. It's a magical (yes, magical... ok, seamless) experience to take a note on your iPhone and open your Mac four days later knowing that your change will be there. In fact, OS X 10.8 introduced a feature where a sleeping computer can download updates without even opening the lid (if your Mac is plugged into an outlet). I often jot down reminders on the subway and then open my computer the next day to be told of what I've surely forgotten to do.

It just sucks that there's nothing 6Wunderkinder and other developers can do to make their product that awesome.
Is a $329 iPad mini Good Business?
I posed a question earlier today about Apple's new iPad mini pricing. Is there anyone who would have bought the iPad at $299 that wouldn't purchase it at $329? My guess is no. There are reasons such as brand/price perception, the supposed $299 psychological barrier, and more that I don't really want to cover, so I'll pose it as a simple mathematical statement.

If less than 10% of users are willing to stand their ground and say no, this is too expensive, then Apple has made a solid play. If it is higher, Apple has done poorly. The 10% comes from the fact that for every person who buys it at $329, Apple is making an extra $30 in revenue. That means that every 10 they sell at $329 is enough to cover the difference on that one person who says "no, I will not pay the extra $30." Assuming Apple's margins are similar to their usual 30-35%, and [I'm really assuming] that the cost is around $210-$220 to produce the low end model, then they would raise their margins from 30% or so to 35-ish% by adding that $30 in profit. Apple won't have to worry about making up the margins in volume.

That said, I'm not going to get one, but that's because outside of development I never use my iPad, not because I think it's too expensive. I probably wouldn't have gotten one at $249 or $199 either.
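To make that break-even arithmetic concrete, here's a minimal sketch in Swift. The $299 and $329 prices are the ones from the post; the function and variable names are mine, purely for illustration, and the walk-away rates are hypothetical.

```swift
import Foundation

// Break-even check for the $299 vs. $329 argument above: how large a share of
// buyers can walk away at the higher price before Apple earns less per
// prospective buyer than it would have at $299?
let lowPrice = 299.0
let highPrice = 329.0

// Expected revenue per prospective buyer if a fraction `walkAwayRate`
// refuses to pay the higher price.
func revenuePerProspect(price: Double, walkAwayRate: Double) -> Double {
    price * (1.0 - walkAwayRate)
}

// Revenue per prospect is equal at both prices when 299 = 329 * (1 - f).
let breakEvenRate = 1.0 - lowPrice / highPrice
print(String(format: "Break-even walk-away rate: %.1f%%", breakEvenRate * 100)) // ~9.1%

print(revenuePerProspect(price: highPrice, walkAwayRate: 0.05)) // ~312.55, better than 299
print(revenuePerProspect(price: highPrice, walkAwayRate: 0.15)) // ~279.65, worse than 299
```

The break-even point lands at roughly 9%, which is where the post's "less than 10%" threshold comes from.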
An App Store Retrospective
Four years ago today, the Apple App Store launched, changing the face of software distribution. Anyone from a curious 13-year-old to a grizzled developer who lived through learning Pascal can release an app and hope to strike gold. Even Apple's own operating systems are distributed with this model now.

I distinctly remember my first thoughts on the matter. I was standing in line for the iPhone 3G, not for myself, but for a coworker who I had convinced to upgrade (so I could get his original iPhone). Imagine, the internet in my pocket, anywhere I went. The future had arrived for me. Four years later, this has become commonplace. But I do remember being asked on that line, "so what do you think about this app store?" by a curious line stander. I didn't have much background actually developing software yet, so I heartily told him "Eh, you can already download apps from Installer, who cares where you get it from?" (Sidenote: Remember Installer?)

Well, as it turns out, it was kinda a big deal. Kinda, sorta, the biggest deal relating to the economics of software in years. It singlehandedly took away the insane console model, made purchasing software seamless, created the casual gaming genre, and gave hope to thousands of indie developers. I won't go into too much detail, but companies like EA now focus a great deal of effort on these tiny computers that pose as phones.

I also learned so much by tinkering away, writing bad code for the iPhone, and eventually publishing my own apps. So last but not least, you can thank Apple for making a platform compelling enough to inspire young minds to play around with software development, and not sit in the doldrums making software for a faceless financial institution. I can't imagine where I would be if I had to work for a bank. Instead, I play with an iPhone or Android phone all day. I'm constantly intrigued to find out what the newest goodies jQuery has to offer (nothing), if anything has come about in HTML5 (eventually the standards body will get to it), or just ponder what the next five years of technology will bring us.
Back from the Mac
iOS 6 ends up bringing a lot of interesting new features to the regular user, and looking over the API differences, not a lot on the developer side of things. iOS 5 was a gigantic leap for developers, starting with ARC, Storyboards, and a bajillion[1] APIs opened up. I'm willing to bet that this is becoming Apple's calling card. One on, one off, is now to be prevalent in designing both hardware and software.
A pattern is emerging that makes it seem pretty likely:
- iPhone 3G, 3GS.
- iPhone 4, iPhone 4S.
- Leopard, Snow Leopard.
- Lion, Mountain Lion.
- iOS 5, iOS 6.
- Every cool new Mac with features ahead of its time (hello Retina display), and its subsequent spec-bump releases.

First you make something new, innovative, bold. Then you take the time to add fixes, polish, and finish. Figure out how to make something awesome, then get the margins down with your supply chain and economies of scale. That's what Apple have defined as their new strategy: release and refine.

[1] That's a technical term for all you non-tech savvy folk.
Create a Catchy Phone Number for $3 With Google Voice
When I was a kid, I used to try and make words and phrases out of my phone number, because before the internet we had nothing better to do. These days, if you want a custom phone number, there is a way to do it for only $3. So, here's a quick little life hack for you.

When you sign up for Google Voice, they offer you the ability to pick your own phone number from the pool of numbers that they own. I used Google Voice for a couple of years, before ultimately deciding to port my number to my carrier. The process to port your number only costs $3, a phone call to your carrier, and a few hours of waiting for it to go through (in which you can continue to use your phone).

1. Head over to https://voice.google.com/unlock.
2. Sign in and click that you agree to pay $3, and that your number will be unlocked.
3. Call your carrier (some even provide online options) and ask them to port your number. If they ask for an account number, it is the same as your Google Voice number.
4. Your carrier will give you instructions on what to do when your request comes through. On Verizon, that process is just replying to a text message.
5. Wait.
6. Enjoy your new customized phone number.
WWDC 2012 Predictions
I figure, might as well get these on the record, and afterwards we can all laugh at how wrong I am.

The Mac

It's dead. Just kidding, it's being revitalized. As for all the retina rumors, I suspect we're going to go back to three lines of Mac notebooks. A MacBook Air, the entry level machine for every student looking to head off for their first semester of college. The MacBook (making its triumphant return), the middle of the line notebook, the one that has a spinning platter disk, but an oh-so-nice retina display. And the MacBook Pro, for all those old fogies that need to have their FireWire and do some real hardcore video editing on the go.

The iMac will receive an incremental update. I find it hard to believe that they're going to be able to bring a retina-capable display to a 27" screen, but Apple have surprised me before, so I wouldn't rule it out. To me, this seems like something they can phase in over time, when they get margins down on the process through practice with other Macs. The Mac Pro should be getting new hardware, since it hasn't been updated since Barack Obama was successfully convincing us about change you can believe in. Nothing surprising here, just the latest series of Xeons thrown in there, with upgradability to SSD.

iOS (in order from most likely to happen, to least likely)

- Native Facebook integration. Facebook is everywhere, heck, it's even over the hill to lots, so it really ought to be a part of iOS by now. You'll see that, posting photos and status updates much like Twitter, in iOS 6.
- New Maps app and APIs. This has been everywhere. Apple and Google are on the rocks, this is the logical move. I'd expect backwards compatibility with current APIs, but anything new ought to be tempting to anyone who deals with the CoreLocation and MapKit frameworks on a regular basis.
- A personal wallet-type app, which lets you keep credit cards, coupons, etc. In the fall, an NFC-enabled iPhone 5 to supplement this app. Much like when Reminders was announced for iOS 5 and everyone said, uh, "that's nice", until Siri was announced.
- More iPad-specific UI. Tiny notification center banners that you have to squint to see on even the retinaiest of iPads just shouldn't cut it. iOS users aren't stupid enough (I think?) that they can't make the connection between some changes between the iPad and iPhone.
- Better notification center. Easier to close out things, and swipe-to-delete functionality for individual notifications. A new animation for notifications coming in, which doesn't just oddly flip over.
- Siri comes out of beta in iOS 6. So Siri will be in a beta of the update that brings it out of beta. Confusing enough to make me stop writing more on this matter.
- A step back from skeuomorphism. The leather stitching on Notes is cool, that's fine, but why do I need rich Corinthian leather to help me find my friends?
- No file system, but the ability to share information between applications. A way for a developer to tick a checkmark that says "let x file-type be used globally". So a camera app would say "I can make a file that is of type .png, let any other application that says they can handle .png use these files." This is what I would be most excited about in iOS 6 as a developer, and a user. You could establish a workflow this way, taking the iPad from one stupid designation of "consumption device" to another stupid one of "creation device".
- Backing away from modality. Quick reply to text messages, and other types of functionality that don't necessarily just bring you into an app.
- Just a dark-horse guess: Pull to Refresh becomes an Apple UI standard.

Other Apple products

Apple TV SDK? I'm very hesitant on this one, I'd have to say no for now. To me it just doesn't feel like the time is right, but I'd love to be wrong and see what developers can do when given a 42" screen to play with. Just imagine, scam Pokemon apps that crash right away when you open your TV, in 1080p glory!

Expect iCloud to get Reminders and Notes added to its list of apps, which should be no surprise since they've already started testing that out. Some accessory updates, like a Gigabit Wireless N AirPort Extreme.
A Letter
Below is a letter that I wrote.
L
Thoughts on Apple's Messages
The idea of a continuous client is something that has sent nerds' hearts aflutter for a long time. No matter what device you pick up, you'll have an up to date conversation log.

Now that Apple has released a beta of their Messages (formerly iChat) app, iMessage is more accessible than ever. (iMessage is the protocol, Messages is the app.) I'm toying with the idea of dropping AIM for straight iMessage. If you get an iMessage on your phone, it comes to your Mac, and vice versa. You can now reply to your friends without picking up your phone and poking a 3.5" glass screen.

A new problem arises when using iMessage, though. If you get an IM, you're now going to have it ring through on every device. If you're getting 3 IMs a minute, that's 6 buzzes if you have an iPhone and iPad. That doesn't even include the 3 dings that Messages will give you to alert you. And my battery life, oh I can only imagine the hit that's going to take when I keep getting push messages left and right.

The answer is that the various messages apps need to have an awareness of your state. There can be automated ways of doing this, but that might create a lot of false positives. Say you pick up your phone to play a game and Apple redirects your messages there, since that is the most recent device, but you don't want that. Or if you're on your Mac, but don't want to be receiving messages there, same issue, different device.

To solve this, I would propose that Apple adds a few little tweaks to the Messages app. First of all, add a little button that mutes all other devices' alerts, though they will continue to receive the actual updated chat log. (I'm no Photoshop whiz, obviously.) The second is an auto-away period. If you have not touched your computer for n minutes, direct your chats to your secondary device. It's more of an addition to the idle status than a whole new feature. When you return to your computer, Messages pings the devices to say "I'm back at the computer, stop redirecting my message notifications" if your mute notifications toggle is on.

If they can add those two things, I see no reason why it wouldn't be far more convenient to use iMessage than AIM, without worrying about logging myself in and out of devices.
Security Chase
Go to your Chase account and enter your password. Now log out, and enter your password with a different pattern of capitalization. So if your password was Password, now try pASSWORD. I bet you it worked and Chase still let you into your account.

I'll preface the coming diatribe with a statement about my expertise. I am not a security researcher and would never call myself an expert in the field of cryptography. I'm just a software developer who likes to poke around in security matters in as amateur a way as possible. The material covered here is a basic explanation, and there are many more factors in play. Feel free to contact me if you have more that you'd like to discuss.

So, what's the big deal? Security is important. It should be a bank's top priority. Why does capitalization matter? Because ignoring it lowers a hacker's barrier to entry into your account by a factor of 26.

How does this work? It's simple enough. The total number of characters that you can enter will be called the alphabet. If you only allow lowercase letters, your total alphabet size is 26 (all the letters from a-z). If you allow lowercase and uppercase letters (a-z, A-Z), the size doubles to 52. If you add in digits (0-9), it becomes 62. If you add in symbols (such as ? and !), your alphabet is now up to 95 characters, because there are 33 symbols on a standard keyboard.

Chase forbids you from using special symbols when creating a password, so you're starting off with a maximum alphabet of 62 characters. We showed above that they are also not distinguishing between lowercase and capital letters, which lowers it again by 26 (since a is the same as A). That leaves us with a total of 36 characters to choose from to make a password.

If you had the password abcdefghij (please don't be this stupid), your password length is 10. If you are only allowed to use 36 characters, then there are 36^10 total possible passwords. You can see this by splitting up the password: there are 36 options for the 1st character, 36 options for the 2nd, 36 for the 3rd, and so on. If you were allowed an alphabet of 95 characters, it becomes 95 options for the 1st, 95 for the 2nd, etc.

How much safer is this? We'll use the password abcdefghij for this mind experiment, and a set of computers that are making 100 billion guesses per second. If you had an alphabet of only lowercase letters, it would take 24 minutes to crack that password through brute force. If you have an alphabet of lowercase letters and digits (Chase's situation), the number jumps up to 10.45 hours. While this is a nice improvement, it is nothing that a little more CPU power can't overcome. This really isn't going to keep you safe for very long. If you have an alphabet where lowercase and uppercase letters are distinct, along with digits, the time to brute force jumps all the way up to 3.25 months. This is a vast improvement, but still not something that a little horsepower from a hacker can't fix. An alphabet of lowercase, uppercase, digits, and symbols bumps that time up to 19.25 years. This is your gold standard. You should be changing your passwords more often than this as it is, and ideally using passwords longer than 10 characters.

Most people don't make their passwords complex or long, because they're harder to remember. The price you pay with this approach is a lack of security. That's a price that you should not have to pay when dealing with your bank. It might cost you so much more than just your peace of mind.
You can find out more information at Steve Gibson's Password Haystacks page; all the calculations are based on his search space calculator.
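If you want to sanity-check those figures yourself, here's a small sketch in Swift that reproduces the arithmetic. The alphabet sizes and the 100-billion-guesses-per-second rate are the ones from the post; the variable names are mine. It only counts passwords of exactly 10 characters, so the results land slightly below the Haystacks figures, which also sum the shorter lengths.

```swift
import Foundation

// Rough brute-force time for a 10-character password at 100 billion guesses
// per second, for the alphabet sizes discussed above.
let guessesPerSecond = 100_000_000_000.0
let passwordLength = 10.0

let alphabets: [(name: String, size: Double)] = [
    ("lowercase only", 26),
    ("lowercase + digits (Chase)", 36),
    ("lower + upper + digits", 62),
    ("lower + upper + digits + symbols", 95),
]

for alphabet in alphabets {
    let combinations = pow(alphabet.size, passwordLength)   // e.g. 36^10 for Chase
    let seconds = combinations / guessesPerSecond
    let summary = String(format: "%.0f seconds (~%.1f days)", seconds, seconds / 86_400)
    print("\(alphabet.name): \(summary)")
}
// Prints roughly 24 minutes, 10 hours, 3 months, and 19 years respectively.
```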
Teachers Need To Teach
Educators have a goal: to teach children. These days in America, our goal seems to be concerned with how to trick, or worse, force a kid into learning. What educators really should be focusing on is how to relate the information to the child. That is how you get a child excited to learn.

The Children Must Play.
Why My App Sucked And Why I Won't Make The Same Mistakes Again
When you're learning, it's important to make mistakes; that's how you learn. When you spend 10 hours looking for what is wrong, and it turns out you wrote if(x=1) when you meant if(x==1), I can guarantee you that the next time you look for why a piece of code is magically broken, that will be the first thing you check. If you do this enough times, you'll find yourself fixing stupid mistakes before you even make them. That's how you get good.

Preface

The most important thing for me over the last 2 years of learning iOS development has been getting things wrong. This piece is about everything I got wrong in developing my first app, Batting Goggles. It's also about how I fixed those mistakes. I'm no self-help guru, this is just my experience with mistake making. Only now do I even feel confident that I can go into making an app without screwing up the things I've messed up in the past. I can do this because I am very wary of repeating these mistakes. So without further ado, I present to you Batting Goggles by the numbers.

Version 1.0

I finished Batting Goggles and I felt an immediate sense of accomplishment. It was the first thing I had ever truly created from scratch. The idea was Craig's (my partner in app development), the execution was all mine. At first we weren't on the same page about how to make what we wanted to make, but once we came up with the interface, it was smooth sailing. The app was very bare, but did everything that we had wanted it to do. You could find a player, pull up his heat map, and find his stats for the last season. Then Craig asked me to add a feature. He wanted to be able to set Favorites, and a lineup for each of the current teams playing. This would make it easier to navigate through the app, instead of having to go through thousands of players every time you wanted to look for someone. (There was a search, but the point still holds.)

Version 1.1

And with a lot of effort, I added a tab bar to the bottom of the screen with items for favorites and lineups. The reason it was so tough was because I had 1 view controller and a lot of booleans to check on the current status of the app. This was not a good idea and will never be one. Anyone who does this, please heed these words... don't ever do anything that stupid in any aspect of your life. The amount of code that went into it was tremendous, but when it was done, I decided I would take a nice break from working on the app. Then the bugfixes came.

Versions 1.2-1.3

The break didn't really happen, but eventually I got the bugs fixed up. Apple had just released the iPhone 3G[S] (remember the box around the [S]?). They tested the app on that device the next time I wanted to submit, and for some reason it didn't work. My guess is they actually hadn't been doing a great job of testing it before, because the bugs I had introduced were pretty damn noticeable when you switched between screens quickly. I had not been testing it like that, just how I would use it, so I didn't notice it being broken. The lesson I learned: test for even the oddest and most accidental ways to use an app. There are infinite ways to do things, and your way is only right to you.

Version 1.5

This was the last release for the 2010 season. We got a lot of bugs out of the way, and nothing too major feature-wise, so we called it a day. People were still downloading, but some were complaining that the stats in the app were out of date. This was true: we had no way of updating the numbers in the app, so they were based off the prior season, which had ended 365 days earlier. At this point I resolved to add a lot of features to version 2.0 and turn those frowns upside down. It was November, and the new season started in April, so we had hoped to get it out in early March, right in time for the baseball season. As it turns out, I wasn't so good at estimating.

The Offseason

Craig and I talked over the features we wanted, and we decided that the first thing to tackle would be live stats. We wanted to make sure that people couldn't complain about that anymore, and we figured out a pretty simple solution involving some Python scripts and Dropbox. We also wanted to expand the data all the way back to when MLB started keeping track of this info, so now we would have 3 years of data instead of 1. Then I found an app that I liked, and it had a baseball card view, so I added that to the app. I wanted to make Goggles multi-dimensional, instead of just providing heat maps. With this, you'd be able to just use our app as your go-to for your daily baseball stats questions. I then decided to add markers on the side of the list denoting the first letter of each last name, so you could navigate far quicker through the player list, much like you see in the Contacts app. From using Goggles I realized that clearing and editing the lists users made was a pain, so I was going to make that experience more in line with Apple's standard apps.

Version 2.0

As I was getting close to finishing version 2.0, I realized that I had done everything that involved the iPhone wrong. I didn't understand Model-View-Controller. I used mismatching bracket/dot syntax which made the code incredibly confusing to read. I'm pretty sure that some of the code I had written just worked by sheer witchcraft and backdoor deals with the devil. When I couldn't figure out how to integrate some of the new features I was hoping to add, I realized it was time to tear it all down and rewrite it. I rewrote Batting Goggles over the course of a week. It didn't take too long because the thought behind the original code was right, just often redundant and convoluted. I was able to bring in all the features that I had hoped to, and felt a lot better about my app. In all likelihood no one else even cared or noticed. When I released it I felt that same personal accomplishment, because really, I had made 2 apps: a broken one, and a better one. It is still not perfect, but it is actually decipherable now, and much more flexible if/when I want to add features or fix things.

Conclusion

I made lots and lots of mistakes, but it's ok because it really was my first big project. One mistake that I failed to mention was that when I released version 2.0, it was August, and the baseball season was almost over. I felt bad for a bit for the customers who paid for it and were essentially beta testers for a year and change, and had no use for the app for almost the entire 2011 season. All that personal accountability has made me far more detail oriented, and anal-retentive about releasing sloppy work. Since then I have released 3 apps, all of which I can proudly point to. I have another one coming out very soon; hopefully it'll be more of a home run than Batting Goggles.
Prioritize Priorities
As Merlin Mann would say: if you have 27 priorities, then you don't have any priorities. With schoolwork and work-work catching up to me, I have very little time for my own coding. It's important for me to make sure I get done what I need to get done. Midterms, homework, deadlines, and life have been making it difficult of late for me to sit down and churn out a bunch of code for myself, even though the things I want to do are quick and simple. When I do have some spare time, it hurts to even think about coding.

Right now my number one priority is school; finishing this semester and graduating is obviously the most important thing. Then comes work. We're very close to releasing the newest version of our browser and I want to make sure that goes off without a hitch. Last but not least is my own code and whatever free time I have left. Hopefully the sacrifices I make pay off in the long run. Sadly this also means that Unmentionables is going to be pushed back a little bit. It will probably be feature complete within the next couple of weeks, and then we can start rigorously testing it. Here's to priorities.
It's about the AI Say I
Apple may not have formally said it, but they did have a "one more thing" yesterday. They pulled Siri from up their sleeves, which many technology pundits had predicted (kudos to 9to5mac.com on the original scoop). What is surprising to me is that a lot of people appear to be under the impression that Siri is a parlor trick that you can use to show off your phone. This is not voice control, this is not FaceTime, this isn't even the Compass app. This is something you will use every day and come to rely on.

Siri is real; it is a revolution, though it may seem gimmicky until it's perfected. Apple has slapped a beta label on it, which it does not do all willy nilly (you probably know who I'm hinting at). Siri is light years beyond voice control; its natural language processing is proof of that. Siri is smart, and it can respect your associations: you tell Siri that someone is your mother, and it understands. That's the part of Siri that makes it special, it's there to help you. If you tell it to change your meeting at 3pm when you don't have one, it understands that you made a mistake, and instead of telling you "I have no knowledge of a meeting at 3", it will look at your schedule and ask, are you sure you don't mean the one at 2pm, or 4pm?

That's just dipping your toe into the pool when you think about what it can do, and when it learns even more and gets even smarter, it won't just be scheduling a meeting for you. It will be able to find out where you want to go on vacation, book your hotel and plane tickets based on what airline you like to fly and which hotel you find pleasant. After it's done all that work, it will suggest an itinerary for you based on what your interests are, knowing whether your favorite band is in town, or if you really just want to go bar hopping.

This is the step that Apple is taking to become more than a hardware company. The iPhone is one fun toy. You can play games, do real work, and even create content on it. But how long will that gravy train ride last? iCloud and Siri bring Apple to the next step of computing, where we enter a more device-agnostic environment. It's easy to copy a phone's form factor or specs, but very few companies have the wherewithal to bring together the cohesive vision that Apple is pushing. Consumers in 2 years will want to get their phone because of what it can do for them, not because your wife will love the Tegra 2 chipset. Siri is a step towards the world of the Jetsons becoming a reality. I'm just giddy to see technology heading in this direction and am really looking forward to Skynet... I mean Siri.

Of note, Steve Jobs passed away while I was writing this post. To quote @gabrielroth: "The corporation is the most powerful tool we've ever invented. It's typically used as a club or a lever. Steve Jobs used it as a paintbrush." This is all part of the grander vision that Steve Jobs saw, and hopefully some day we can all live it.

Recent content in Articles on Smashing Magazine – For Web Designers And Developers
Smashing Animations Part 5: Building Adaptive SVGs With `<symbol>`, `<use>`, And CSS Media Queries
SVGs, they scale, yes, but how else can you make them adapt even better to several screen sizes? Web design pioneer Andy Clarke explains how he builds what he calls "adaptive SVGs" using `<symbol>`, `<use>`, and CSS Media Queries.
I've written quite a lot recently about how I prepare and optimise SVG code to use as static graphics or in animations. I love working with SVG, but there's always been something about them that bugs me.

To illustrate how I build adaptive SVGs, I've selected an episode of The Quick Draw McGraw Show called "Bow Wow Bandit," first broadcast in 1959. In it, Quick Draw McGraw enlists his bloodhound Snuffles to rescue his sidekick Baba Looey. Like most Hanna-Barbera title cards of the period, the artwork was made by Lawrence (Art) Goble.

Let's say I've designed an SVG scene like that one, based on Bow Wow Bandit, which has a 16:9 aspect ratio with a viewBox size of 1920×1080. This SVG scales up and down (the clue's in the name), so it looks sharp when it's gigantic and when it's minute. But on small screens, the 16:9 aspect ratio (live demo) might not be the best format, and the image loses its impact. Sometimes, a portrait orientation, like 3:4, would suit the screen size better. But herein lies the problem, as it's not easy to reposition internal elements for different screen sizes using just viewBox. That's because in SVG, internal element positions are locked to the coordinate system from the original viewBox, so you can't easily change their layout between, say, desktop and mobile. This is a problem because animations and interactivity often rely on element positions, which break when the viewBox changes.

My challenge was to serve a 1080×1440 version of Bow Wow Bandit to smaller screens and a different one to larger ones. I wanted the position and size of internal elements, like Quick Draw McGraw and his dawg Snuffles, to change to best fit these two layouts. To solve this, I experimented with several alternatives.

Note: Why are we not just using <picture> with external SVGs? The <picture> element is brilliant for responsive images, but it only works with raster formats (like JPEG or WebP) and external SVG files treated as images. That means that you can't animate or style internal elements using CSS.

Showing And Hiding SVG

The most obvious choice was to include two different SVGs in my markup, one for small screens, the other for larger ones, then show or hide them using CSS and Media Queries:

<svg id="svg-small" viewBox="0 0 1080 1440"> <!-- ... --> </svg>
<svg id="svg-large" viewBox="0 0 1920 1080"> <!-- ... --> </svg>

#svg-small { display: block; }
#svg-large { display: none; }

@media (min-width: 64rem) {
  #svg-small { display: none; }
  #svg-large { display: block; }
}

But using this method, both SVG versions are loaded, which, when the graphics are complex, means downloading lots and lots and lots of unnecessary code.

Replacing SVGs Using JavaScript

I thought about using JavaScript to swap in the larger SVG at a specified breakpoint:

if (window.matchMedia('(min-width: 64rem)').matches) {
  svgContainer.innerHTML = desktopSVG;
} else {
  svgContainer.innerHTML = mobileSVG;
}

Leaving aside the fact that JavaScript would now be critical to how the design is displayed, both SVGs would usually be loaded anyway, which adds DOM complexity and unnecessary weight. Plus, maintenance becomes a problem, as there are now two versions of the artwork to maintain, doubling the time it would take to update something as small as the shape of Quick Draw's tail.
The Solution: One SVG Symbol Library And Multiple Uses

Remember, my goal is to:

- Serve one version of Bow Wow Bandit to smaller screens,
- Serve a different version to larger screens,
- Define my artwork just once (DRY), and
- Be able to resize and reposition elements.

I don't read about it enough, but the <symbol> element lets you define reusable SVG elements that can be hidden and reused to improve maintainability and reduce code bloat. They're like components for SVG: create once and use wherever you need them:

<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-body" viewBox="0 0 620 700">
    <g class="quick-draw-body">[…]</g>
  </symbol>
  <!-- ... -->
</svg>

<use href="#quick-draw-body" />

A <symbol> is like storing a character in a library. I can reference it as many times as I need, to keep my code consistent and lightweight. Using <use> elements, I can insert the same symbol multiple times, at different positions or sizes, and even in different SVGs. Each <symbol> must have its own viewBox, which defines its internal coordinate system. That means paying special attention to how SVG elements are exported from apps like Sketch.

Exporting For Individual Viewboxes

I wrote before about how I export elements in layers to make working with them easier. That process is a little different when creating symbols. Ordinarily, I would export all my elements using the same viewBox size. But when I'm creating a symbol, I need it to have its own specific viewBox. So I export each element as an individually sized SVG, which gives me the dimensions I need to convert its content into a symbol. Let's take the SVG of Quick Draw McGraw's hat, which has a viewBox size of 294×182:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 294 182"> <!-- ... --> </svg>

I swap the SVG tags for <symbol> and add its artwork to my SVG library:

<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-hat" viewBox="0 0 294 182">
    <g class="quick-draw-hat">[…]</g>
  </symbol>
</svg>

Then, I repeat the process for all the remaining elements in my artwork. Now, if I ever need to update any of my symbols, the changes will be automatically applied to every instance where it's used.

Using A <symbol> In Multiple SVGs

I wanted my elements to appear in both versions of Bow Wow Bandit, one arrangement for smaller screens and an alternative arrangement for larger ones. So, I create both SVGs:

<svg class="svg-small" viewBox="0 0 1080 1440"> <!-- ... --> </svg>
<svg class="svg-large" viewBox="0 0 1920 1080"> <!-- ... --> </svg>

…and insert links to my symbols in both:

<svg class="svg-small" viewBox="0 0 1080 1440">
  <use href="#quick-draw-hat" />
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <use href="#quick-draw-hat" />
</svg>

Positioning Symbols

Once I've placed symbols into my layout using <use>, my next step is to position them, which is especially important if I want alternative layouts for different screen sizes. Symbols behave like <g> groups, so I can scale and move them using attributes like width, height, and transform:

<svg class="svg-small" viewBox="0 0 1080 1440">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(-30,610)"/>
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(350,270)"/>
</svg>

I can place each <use> element independently using transform. This is powerful because rather than repositioning elements inside my SVGs, I move the <use> references.
My internal layout stays clean, and the file size remains small because I'm not duplicating artwork. A browser only loads it once, which reduces bandwidth and speeds up page rendering. And because I'm always referencing the same symbol, its appearance stays consistent, whatever the screen size.

Animating <use> Elements
Here's where things got tricky. I wanted to animate parts of my characters – like Quick Draw's hat tilting and his legs kicking. But when I added CSS animations targeting internal elements inside a <symbol>, nothing happened.

Tip: You can animate the <use> element itself, but not elements inside the <symbol>. If you want individual parts to move, make them their own symbols and animate each <use>.

It turns out you can't style or animate elements inside a <symbol>, because <use> creates shadow DOM clones that aren't easily targetable. So, I had to get sneaky. Inside each <symbol> in my library SVG, I added a <g> element around the part I wanted to animate:

<symbol id="quick-draw-hat" viewBox="0 0 294 182">
  <g class="quick-draw-hat">
    <!-- ... -->
  </g>
</symbol>

…and animated it using an attribute selector, targeting the href attribute of the <use> element:

use[href="#quick-draw-hat"] {
  animation-delay: 0.5s;
  animation-direction: alternate;
  animation-duration: 1s;
  animation-iteration-count: infinite;
  animation-name: hat-rock;
  animation-timing-function: ease-in-out;
  transform-origin: center bottom;
}

@keyframes hat-rock {
  from { transform: rotate(-2deg); }
  to { transform: rotate(2deg); }
}

Media Queries For Display Control
Once I've created my two visible SVGs – one for small screens and one for larger ones – the final step is deciding which version to show at which screen size. I use CSS Media Queries to hide one SVG and show the other. I start by showing the small-screen SVG by default:

.svg-small { display: block; }
.svg-large { display: none; }

Then I use a min-width media query to switch to the large-screen SVG at 64rem and above:

@media (min-width: 64rem) {
  .svg-small { display: none; }
  .svg-large { display: block; }
}

This ensures there's only ever one SVG visible at a time, keeping my layout simple and the DOM free from unnecessary clutter. And because both visible SVGs reference the same hidden <symbol> library, the browser only downloads the artwork once, regardless of how many <use> elements appear across the two layouts.

Wrapping Up
By combining <symbol>, <use>, CSS Media Queries, and specific transforms, I can build adaptive SVGs that reposition their elements without duplicating content, loading extra assets, or relying on JavaScript. I need to define each graphic only once in a hidden symbol library. Then I can reuse those graphics, as needed, inside several visible SVGs. With CSS doing the layout switching, the result is fast and flexible. It's a reminder that some of the most powerful techniques on the web don't need big frameworks or complex tooling – just a bit of SVG know-how and a clever use of the basics.
Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)
Ready to move beyond static mockups? Here is a practical, step-by-step guide to Intent Prototyping: a disciplined method that uses AI to turn your design intent (UI sketches, conceptual models, and user flows) directly into a live prototype, making it your primary canvas for ideation.
In Part 1 of this series, we explored the "lopsided horse" problem born from mockup-centric design and demonstrated how the seductive promise of vibe coding often leads to structural flaws. The main question remains: How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? In other words, we need a way to build prototypes that are both fast to create and founded on a clear, unambiguous blueprint.

The answer is a more disciplined process I call Intent Prototyping (kudos to Marco Kotrotsos, who coined Intent-Oriented Programming). This method embraces the power of AI-assisted coding but rejects ambiguity, putting the designer's explicit intent at the very center of the process. It receives a holistic expression of intent (sketches for screen layouts, a conceptual model description, boxes-and-arrows for user flows) and uses it to generate a live, testable prototype. This method solves the concerns we've discussed in Part 1 in the best way possible:
- Unlike static mockups, the prototype is fully interactive and can be easily populated with a large amount of realistic data. This lets us test the system's underlying logic as well as its surface.
- Unlike a vibe-coded prototype, it is built from a stable, unambiguous specification. This prevents the conceptual model failures and design debt that happen when things are unclear.
- The engineering team doesn't need to reverse-engineer a black box or become "code archaeologists" to guess at the designer's vision, as they receive not only a live prototype but also a clearly documented design intent behind it.

This combination makes the method especially suited for designing complex enterprise applications. It allows us to test the system's most critical point of failure, its underlying structure, with a speed and flexibility that were previously impossible. Furthermore, the process is built for iteration. You can explore as many directions as you want simply by changing the intent and evolving the design based on what you learn from user testing.

My Workflow
To illustrate this process in action, let's walk through a case study. It's the very same example I've used to illustrate the vibe coding trap: a simple tool to track tests to validate product ideas. You can find the complete project, including all the source code and documentation files discussed below, in this GitHub repository.

Step 1: Expressing An Intent
Imagine we've already done proper research and, having mused on the defined problem, I begin to form a vague idea of what the solution might look like. I need to capture this idea immediately, so I quickly sketch it out. In this example, I used Excalidraw, but the tool doesn't really matter. Note that we deliberately keep it rough, as visual details are not something we need to focus on at this stage. And we are not going to be stuck here: we want to make a leap from this initial sketch directly to a live prototype that we can put in front of potential users. Polishing those sketches would not bring us any closer to achieving our goal. What we need to move forward is to add just enough detail to those sketches so that they can serve as sufficient input for a junior frontend developer (or, in our case, an AI assistant). This requires explaining the following:
- Navigational paths (clicking here takes you to).
- Interaction details that can't be shown in a static picture (e.g., non-scrollable areas, adaptive layout, drag-and-drop behavior).
- What parts might make sense to build as reusable components.
- Which components from the design system (I'm using the Ant Design library) should be used.
- Any other comments that help understand how this thing should work (while sketches illustrate how it should look).

Having added all those details, we end up with an annotated sketch like this. As you can see, this sketch covers both the Visualization and Flow aspects. You may ask, what about the Conceptual Model? Without that part, the expression of our intent will not be complete. One way would be to add it somewhere in the margins of the sketch (for example, as a UML Class Diagram), and I would do so in the case of a more complex application, where the model cannot be simply derived from the UI. But in our case, we can save effort and ask an LLM to generate a comprehensive description of the conceptual model based on the sketch. For tasks of this sort, the LLM of my choice is Gemini 2.5 Pro. What is important is that this is a multimodal model that can accept not only text but also images as input (GPT-5 and Claude 4 also fit that criterion). I use Google AI Studio, as it gives me enough control and visibility into what's happening.

Note: All the prompts that I use here and below can be found in the Appendices. The prompts are not custom-tailored to any particular project; they are meant to be reused as they are.

As a result, Gemini gives us a description and a diagram. The diagram might look technical, but I believe that a clear understanding of all objects, their attributes, and the relationships between them is key to good design. That's why I consider the Conceptual Model to be an essential part of expressing intent, along with the Flow and Visualization. As a result of this step, our intent is fully expressed in two files: Sketch.png and Model.md. This will be our durable source of truth.

Step 2: Preparing A Spec And A Plan
The purpose of this step is to create a comprehensive technical specification and a step-by-step plan. Most of the work here is done by AI; you just need to keep an eye on it. I separate the Data Access Layer and the UI layer, and create specifications for them using two different prompts (see Appendices 2 and 3). The output of the first prompt (the Data Access Layer spec) serves as an input for the second one. Note that, as an additional input, we provide guidelines tailored for prototyping needs (see Appendices 8, 9, and 10). They are not specific to this project, and the technical approach encoded in those guidelines is out of scope for this article. As a result, Gemini provides us with content for DAL.md and UI.md.

Although in most cases the result is reliable, you might still want to scrutinize the output. You don't need to be a real programmer to make sense of it, but some level of programming literacy would be really helpful. However, even if you don't have such skills, don't get discouraged. The good news is that if you don't understand something, you always know who to ask. Do it in Google AI Studio before refreshing the context window. If you believe you've spotted a problem, let Gemini know, and it will either fix it or explain why the suggested approach is actually better. It's important to remember that, by their nature, LLMs are not deterministic and, to put it simply, can be forgetful about small details, especially when it comes to details in sketches.
Fortunately, you don't have to be an expert to notice that the "Delete" button, which is in the upper right corner of the sketch, is not mentioned in the spec. Don't get me wrong: Gemini does a stellar job most of the time, but there are still times when it slips up. Just let it know about the problems you've spotted, and everything will be fixed. Once we have Sketch.png, Model.md, DAL.md, and UI.md, and we have reviewed the specs, we can grab a coffee. We deserve it: our technical design documentation is complete. It will serve as a stable foundation for building the actual thing without deviating from our original intent, ensuring that all components fit together perfectly and all layers are stacked correctly.

One last thing we can do before moving on to the next steps is to prepare a step-by-step plan. We split that plan into two parts: one for the Data Access Layer and another for the UI. You can find the prompts I use to create such a plan in Appendices 4 and 5.

Step 3: Executing The Plan
To start building the actual thing, we need to switch to another category of AI tools. Up until this point, we have relied on generative AI. It excels at creating new content (in our case, specifications and plans) based on a single prompt. I'm using Google Gemini 2.5 Pro in Google AI Studio, but other similar tools may also fit such one-off tasks: ChatGPT, Claude, Grok, and DeepSeek. However, at this step, that wouldn't be enough. Building a prototype based on specs and according to a plan requires an AI that can read context from multiple files, execute a sequence of tasks, and maintain coherence. A simple generative AI can't do this. It would be like asking a person to build a house by only ever showing them a single brick. What we need is an agentic AI that can be given the full house blueprint and a project plan, and then get to work building the foundation, framing the walls, and adding the roof in the correct sequence.

My coding agent of choice is Google Gemini CLI, simply because Gemini 2.5 Pro serves me well, and I don't think we need any middleman like Cursor or Windsurf (which would use Claude, Gemini, or GPT under the hood anyway). If I used Claude, my choice would be Claude Code, but since I'm sticking with Gemini, Gemini CLI it is. But if you prefer Cursor or Windsurf, I believe you can apply the same process with your favourite tool.

Before tasking the agent, we need to create a basic template for our React application. I won't go into this here; you can find plenty of tutorials on how to scaffold an empty React project using Vite. Then we put all our files into that project. Once the basic template with all our files is ready, we open Terminal, go to the folder where our project resides, and type "gemini". And we send the prompt to build the Data Access Layer (see Appendix 6). That prompt implies step-by-step execution, so upon completion of each step, I send the following:

Thank you! Now, please move to the next task. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec. After each task, stop so that I can test it. Don't move to the next task before I tell you to do so.

As the last task in the plan, the agent builds a special page that exposes all the capabilities of our Data Access Layer so that we can test it manually. It doesn't look fancy, to say the least, but it allows us to ensure that the Data Access Layer works correctly before we proceed with building the final UI.
And finally, we clear the Gemini CLI context window to give it more headspace and send the prompt to build the UI (see Appendix 7). This prompt also implies step-by-step execution. Upon completion of each step, we test how it works and how it looks, following the "Manual Testing Plan" from UI-plan.md. I have to say that despite the fact that the sketch has been uploaded to the model context and, in general, Gemini tries to follow it, attention to visual detail is not one of its strengths (yet). Usually, a few additional nudges are needed at each step to improve the look and feel. Once I'm happy with the result of a step, I ask Gemini to move on:

Thank you! Now, please move to the next task. Make sure you build the UI according to the sketch; this is very important. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. After each task, stop so that I can test it. Don't move to the next task before I tell you to do so.

Before long, the result looks like this, and in every detail it works exactly as we intended. The prototype is up and running and looking nice. Does that mean we are done with our work? Surely not; the most fascinating part is just beginning.

Step 4: Learning And Iterating
It's time to put the prototype in front of potential users and learn more about whether this solution relieves their pain or not. And as soon as we learn something new, we iterate. We adjust or extend the sketches and the conceptual model based on that new input, update the specifications, create plans to make changes according to the new specifications, and execute those plans. In other words, for every iteration, we repeat the steps I've just walked you through.

Is This Workflow Too Heavy?
This four-step workflow may create the impression of a somewhat heavy process that requires too much thinking upfront and doesn't really facilitate creativity. But before jumping to that conclusion, consider the following:
- In practice, only the first step requires real effort, along with the learning in the last step. AI does most of the work in between; you just need to keep an eye on it.
- Individual iterations don't need to be big. You can start with a Walking Skeleton: the bare minimum implementation of the thing you have in mind, and add more substance in subsequent iterations.
- You are welcome to change your mind about the overall direction in between iterations.
- And last but not least, maybe the idea of "think before you do" is not something you need to run away from. A clear and unambiguous statement of intent can prevent many unnecessary mistakes and save a lot of effort down the road.

Intent Prototyping Vs. Other Methods
There is no method that fits all situations, and Intent Prototyping is not an exception. Like any specialized tool, it has a specific purpose. The most effective teams are not those who master a single method, but those who understand which approach to use to mitigate the most significant risk at each stage. The comparison below gives you a way to make this choice clearer. It puts Intent Prototyping next to other common methods and tools and describes each one in terms of the primary goal it helps achieve, the specific risks it is best suited to mitigate, typical examples, and why it fits.

Intent Prototyping
- Goal: To rapidly iterate on the fundamental architecture of a data-heavy application with a complex conceptual model, sophisticated business logic, and non-linear user flows.
- Risks it is best suited to mitigate: Building a system with a flawed or incoherent conceptual model, leading to critical bugs and costly refactoring.
- Examples: A CRM (Customer Relationship Management system). A Resource Management Tool. A No-Code Integration Platform (admin's UI).
- Why: It enforces conceptual clarity. This not only de-risks the core structure but also produces a clear, documented blueprint that serves as a superior specification for the engineering handoff.

Vibe Coding (Conversational)
- Goal: To rapidly explore interactive ideas through improvisation.
- Risks it is best suited to mitigate: Losing momentum because of analysis paralysis.
- Examples: An interactive data table with live sorting/filtering. A novel navigation concept. A proof-of-concept for a single, complex component.
- Why: It has the smallest loop between an idea conveyed in natural language and an interactive outcome.

Axure
- Goal: To test complicated conditional logic within a specific user journey, without having to worry about how the whole system works.
- Risks it is best suited to mitigate: Designing flows that break when users don't follow the "happy path."
- Examples: A multi-step e-commerce checkout. A software configuration wizard. A dynamic form with dependent fields.
- Why: It's made to create complex if-then logic and manage variables visually. This lets you test complicated paths and edge cases in a user journey without writing any code.

Figma
- Goal: To make sure that the user interface looks good, aligns with the brand, and has a clear information architecture.
- Risks it is best suited to mitigate: Making a product that looks bad, doesn't fit with the brand, or has a layout that is hard to understand.
- Examples: A marketing landing page. A user onboarding flow. Presenting a new visual identity.
- Why: It excels at high-fidelity visual design and provides simple, fast tools for linking static screens.

ProtoPie, Framer
- Goal: To make high-fidelity micro-interactions feel just right.
- Risks it is best suited to mitigate: Shipping an application that feels cumbersome and unpleasant to use because of poorly executed interactions.
- Examples: A custom pull-to-refresh animation. A fluid drag-and-drop interface. An animated chart or data visualization.
- Why: These tools let you manipulate animation timelines, physics, and device sensor inputs in great detail. Designers can carefully work on and test the small things that make an interface feel really polished and fun to use.

Low-code / No-code Tools (e.g., Bubble, Retool)
- Goal: To create a working, data-driven app as quickly as possible.
- Risks it is best suited to mitigate: The application will never be built because traditional development is too expensive.
- Examples: An internal inventory tracker. A customer support dashboard. A simple directory website.
- Why: They put a UI builder, a database, and hosting all in one place. The goal is not merely to make a prototype of an idea, but to make and release an actual, working product. This is the last step for many internal tools or MVPs.

The key takeaway is that each method is a specialized tool for mitigating a specific type of risk. For example, Figma de-risks the visual presentation. ProtoPie de-risks the feel of an interaction. Intent Prototyping is in a unique position to tackle the most foundational risk in complex applications: building on a flawed or incoherent conceptual model.

Bringing It All Together
The era of the "lopsided horse" design, sleek on the surface but structurally unsound, is a direct result of the trade-off between fidelity and flexibility. This trade-off has led to a process filled with redundant effort and misplaced focus. Intent Prototyping, powered by modern AI, eliminates that conflict. It's not just a shortcut to building faster; it's a fundamental shift in how we design.
By putting a clear, unambiguous intent at the heart of the process, it lets us get rid of the redundant work and focus on architecting a sound and robust system. There are three major benefits to this renewed focus. First, by going straight to live, interactive prototypes, we shift our validation efforts from the surface to the deep, testing the system's actual logic with users from day one. Second, the very act of documenting the design intent makes us clear about our ideas, ensuring that we fully understand the system's underlying logic. Finally, this documented intent becomes a durable source of truth, eliminating the ambiguous handoffs and the redundant, error-prone work of having engineers reverse-engineer a designer's vision from a black box. Ultimately, Intent Prototyping changes the object of our work. It allows us to move beyond creating pictures of a product and empowers us to become architects of blueprints for a system. With the help of AI, we can finally make the live prototype the primary canvas for ideation, not just a high-effort afterthought.

Appendices
You can find the full Intent Prototyping Starter Kit, which includes all those prompts and guidelines, as well as the example from this article and a minimal boilerplate project, in this GitHub repository.

Appendix 1: Sketch to UML Class Diagram
You are an expert Senior Software Architect specializing in Domain-Driven Design. You are tasked with defining a conceptual model for an app based on information from a UI sketch. ## Workflow Follow these steps precisely: **Step 1:** Analyze the sketch carefully. There should be no ambiguity about what we are building. **Step 2:** Generate the conceptual model description in the Mermaid format using a UML class diagram. ## Ground Rules - Every entity must have the following attributes: - id (string) - createdAt (string, ISO 8601 format) - updatedAt (string, ISO 8601 format) - Include all attributes shown in the UI: If a piece of data is visually represented as a field for an entity, include it in the model, even if it's calculated from other attributes. - Do not add any speculative entities, attributes, or relationships ("just in case"). The model should serve the current sketch's requirements only. - Pay special attention to cardinality definitions (e.g., if a relationship is optional on both sides, it cannot be "1" -- "0..*", it must be "0..1" -- "0..*"). - Use only valid syntax in the Mermaid diagram. - Do not include enumerations in the Mermaid diagram. - Add comments explaining the purpose of every entity, attribute, and relationship, and their expected behavior (not as a part of the diagram, in the Markdown file). ## Naming Conventions - Names should reveal intent and purpose. - Use PascalCase for entity names. - Use camelCase for attributes and relationships. - Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError). ## Final Instructions - **No Assumptions:** Base every detail on visual evidence in the sketch, not on common design patterns. - **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. - **Do not add redundant empty lines between items.** Your final output should be the complete, raw markdown content for Model.md.

Appendix 2: Sketch to DAL Spec
You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand.
You are tasked with creating a comprehensive technical specification for the development team in a structured markdown document, based on a UI sketch and a conceptual model description. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - Model.md: the conceptual model - Sketch.png: the UI sketch There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - TS-guidelines.md: TypeScript Best Practices - React-guidelines.md: React Best Practices - Zustand-guidelines.md: Zustand Best Practices **Step 3:** Create a Markdown specification for the stores and entity-specific hook that implements all the logic and provides all required operations. --- ## Markdown Output Structure Use this template for the entire document. markdown # Data Access Layer Specification This document outlines the specification for the data access layer of the application, following the principles defined in `docs/guidelines/Zustand-guidelines.md`. ## 1. Type Definitions Location: `src/types/entities.ts` ### 1.1. `BaseEntity` A shared interface that all entities should extend. [TypeScript interface definition] ### 1.2. `[Entity Name]` The interface for the [Entity Name] entity. [TypeScript interface definition] ## 2. Zustand Stores ### 2.1. Store for `[Entity Name]` **Location:** `src/stores/[Entity Name (plural)].ts` The Zustand store will manage the state of all [Entity Name] items. **Store State (`[Entity Name]State`):** [TypeScript interface definition] **Store Implementation (`use[Entity Name]Store`):** - The store will be created using `create<[Entity Name]State>()(...)`. - It will use the `persist` middleware from `zustand/middleware` to save state to `localStorage`. The persistence key will be `[entity-storage-key]`. - `[Entity Name (plural, camelCase)]` will be a dictionary (`Record<string, [Entity]>`) for O(1) access. **Actions:** - **`add[Entity Name]`**: [Define the operation behavior based on entity requirements] - **`update[Entity Name]`**: [Define the operation behavior based on entity requirements] - **`remove[Entity Name]`**: [Define the operation behavior based on entity requirements] - **`doSomethingElseWith[Entity Name]`**: [Define the operation behavior based on entity requirements] ## 3. Custom Hooks ### 3.1. `use[Entity Name (plural)]` **Location:** `src/hooks/use[Entity Name (plural)].ts` The hook will be the primary interface for UI components to interact with [Entity Name] data. **Hook Return Value:** [TypeScript interface definition] **Hook Implementation:** [List all properties and methods returned by this hook, and briefly explain the logic behind them, including data transformations, memoization. Do not write the actual code here.] --- ## Final Instructions - **No Assumptions:** Base every detail in the specification on the conceptual model or visual evidence in the sketch, not on common design patterns. - **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. - **Do not add redundant empty lines between items.** Your final output should be the complete, raw markdown content for DAL.md. Appendix 3: Sketch to UI Spec + You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. 
You are tasked with creating a comprehensive technical specification by translating a UI sketch into a structured markdown document for the development team. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - Sketch.png: the UI sketch - Note that red lines, red arrows, and red text within the sketch are annotations for you and should not be part of the final UI design. They provide hints and clarification. Never translate them to UI elements directly. - Model.md: the conceptual model - DAL.md: the Data Access Layer spec There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - TS-guidelines.md: TypeScript Best Practices - React-guidelines.md: React Best Practices **Step 3:** Generate the complete markdown content for a new file, UI.md. --- ## Markdown Output Structure Use this template for the entire document. markdown # UI Layer Specification This document specifies the UI layer of the application, breaking it down into pages and reusable components based on the provided sketches. All components will adhere to Ant Design's principles and utilize the data access patterns defined in `docs/guidelines/Zustand-guidelines.md`. ## 1. High-Level Structure The application is a single-page application (SPA). It will be composed of a main layout, one primary page, and several reusable components. ### 1.1. `App` Component The root component that sets up routing and global providers. - **Location**: `src/App.tsx` - **Purpose**: To provide global context, including Ant Design's `ConfigProvider` and `App` contexts for message notifications, and to render the main page. - **Composition**: - Wraps the application with `ConfigProvider` and `App as AntApp` from 'antd' to enable global message notifications as per `simple-ice/antd-messages.mdc`. - Renders `[Page Name]`. ## 2. Pages ### 2.1. `[Page Name]` - **Location:** `src/pages/PageName.tsx` - **Purpose:** [Briefly describe the main goal and function of this page] - **Data Access:** [List the specific hooks and functions this component uses to fetch or manage its data] - **Internal State:** [Describe any state managed internally by this page using `useState`] - **Composition:** [Briefly describe the content of this page] - **User Interactions:** [Describe how the user interacts with this page] - **Logic:** [If applicable, provide additional comments on how this page should work] ## 3. Components ### 3.1. `[Component Name]` - **Location:** `src/components/ComponentName.tsx` - **Purpose:** [Explain what this component does and where it's used] - **Props:** [TypeScript interface definition for the component's props. Props should be minimal. Avoid prop drilling by using hooks for data access.] - **Data Access:** [List the specific hooks and functions this component uses to fetch or manage its data] - **Internal State:** [Describe any state managed internally by this component using `useState`] - **Composition:** [Briefly describe the content of this component] - **User Interactions:** [Describe how the user interacts with the component] - **Logic:** [If applicable, provide additional comments on how this component should work] --- ## Final Instructions - **No Assumptions:** Base every detail on the visual evidence in the sketch, not on common design patterns. - **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. 
The final document should be a self-contained, comprehensive specification. - **Do not add redundant empty lines between items.** Your final output should be the complete, raw markdown content for UI.md.

Appendix 4: DAL Spec to Plan
You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a plan to build a Data Access Layer for an application based on a spec. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter. There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - TS-guidelines.md: TypeScript Best Practices - React-guidelines.md: React Best Practices - Zustand-guidelines.md: Zustand Best Practices **Step 3:** Create a step-by-step plan to build a Data Access Layer according to the spec. Each task should: - Focus on one concern - Be reasonably small - Have a clear start + end - Contain clearly defined Objectives and Acceptance Criteria The last step of the plan should include creating a page to test all the capabilities of our Data Access Layer, and making it the start page of this application, so that I can manually check if it works properly. I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to review results in between. ## Final Instructions - Note that we are not starting from scratch; the basic template has already been created using Vite. - Do not add redundant empty lines between items. Your final output should be the complete, raw markdown content for DAL-plan.md.

Appendix 5: UI Spec to Plan
You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a plan to build a UI layer for an application based on a spec and a sketch. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter. - Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible. There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - TS-guidelines.md: TypeScript Best Practices - React-guidelines.md: React Best Practices **Step 3:** Create a step-by-step plan to build a UI layer according to the spec and the sketch. Each task must: - Focus on one concern. - Be reasonably small. - Have a clear start + end. - Result in a verifiable increment of the application. Each increment should be manually testable to allow for functional review and approval before proceeding. - Contain clearly defined Objectives, Acceptance Criteria, and Manual Testing Plan. I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to test in between. ## Final Instructions - Note that we are not starting from scratch, the basic template has already been created using Vite, and the Data Access Layer has been built successfully. - For every task, describe how components should be integrated for verification. You must use the provided hooks to connect to the live Zustand store data: do not use mock data (note that the Data Access Layer has already been built successfully).
- The Manual Testing Plan should read like a user guide. It must only contain actions a user can perform in the browser and must never reference any code files or programming tasks. - Do not add redundant empty lines between items. Your final output should be the complete, raw markdown content for UI-plan.md.

Appendix 6: DAL Plan to Code
You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with building a Data Access Layer for an application based on a spec. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter. There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - @docs/guidelines/TS-guidelines.md: TypeScript Best Practices - @docs/guidelines/React-guidelines.md: React Best Practices - @docs/guidelines/Zustand-guidelines.md: Zustand Best Practices **Step 3:** Read the plan: - @docs/plans/DAL-plan.md: The step-by-step plan to build the Data Access Layer of the application. **Step 4:** Build a Data Access Layer for this application according to the spec and following the plan. - Complete one task from the plan at a time. - After each task, stop, so that I can test it. Don't move to the next task before I tell you to do so. - Do not do anything else. At this point, we are focused on building the Data Access Layer. ## Final Instructions - Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. - Do not start the development server; I'll do it by myself.

Appendix 7: UI Plan to Code
You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with building a UI layer for an application based on a spec and a sketch. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - @docs/specs/UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter. - @docs/intent/Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible. - @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. That layer is already ready. Use this spec to understand how to work with it. There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - @docs/guidelines/TS-guidelines.md: TypeScript Best Practices - @docs/guidelines/React-guidelines.md: React Best Practices **Step 3:** Read the plan: - @docs/plans/UI-plan.md: The step-by-step plan to build the UI layer of the application. **Step 4:** Build a UI layer for this application according to the spec and the sketch, following the step-by-step plan: - Complete one task from the plan at a time. - Make sure you build the UI according to the sketch; this is very important. - After each task, stop, so that I can test it. Don't move to the next task before I tell you to do so. ## Final Instructions - Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. - Follow Ant Design's default styles and components. - Do not touch the data access layer: it's ready and it's perfect. - Do not start the development server; I'll do it by myself.
Appendix 8: TS-guidelines.md + # Guidelines: TypeScript Best Practices ## Type System & Type Safety - Use TypeScript for all code and enable strict mode. - Ensure complete type safety throughout stores, hooks, and component interfaces. - Prefer interfaces over types for object definitions; use types for unions, intersections, and mapped types. - Entity interfaces should extend common patterns while maintaining their specific properties. - Use TypeScript type guards in filtering operations for relationship safety. - Avoid the 'any' type; prefer 'unknown' when necessary. - Use generics to create reusable components and functions. - Utilize TypeScript's features to enforce type safety. - Use type-only imports (import type { MyType } from './types') when importing types, because verbatimModuleSyntax is enabled. - Avoid enums; use maps instead. ## Naming Conventions - Names should reveal intent and purpose. - Use PascalCase for component names and types/interfaces. - Prefix interfaces for React props with 'Props' (e.g., ButtonProps). - Use camelCase for variables and functions. - Use UPPER_CASE for constants. - Use lowercase with dashes for directories, and PascalCase for files with components (e.g., components/auth-wizard/AuthForm.tsx). - Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError). - Favor named exports for components. ## Code Structure & Patterns - Write concise, technical TypeScript code with accurate examples. - Use functional and declarative programming patterns; avoid classes. - Prefer iteration and modularization over code duplication. - Use the "function" keyword for pure functions. - Use curly braces for all conditionals for consistency and clarity. - Structure files appropriately based on their purpose. - Keep related code together and encapsulate implementation details. ## Performance & Error Handling - Use immutable and efficient data structures and algorithms. - Create custom error types for domain-specific errors. - Use try-catch blocks with typed catch clauses. - Handle Promise rejections and async errors properly. - Log errors appropriately and handle edge cases gracefully. ## Project Organization - Place shared types in a types directory. - Use barrel exports (index.ts) for organizing exports. - Structure files and directories based on their purpose. ## Other Rules - Use comments to explain complex logic or non-obvious decisions. - Follow the single responsibility principle: each function should do exactly one thing. - Follow the DRY (Don't Repeat Yourself) principle. - Do not implement placeholder functions, empty methods, or "just in case" logic. Code should serve the current specification's requirements only. - Use 2 spaces for indentation (no tabs). 
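To make a few of these conventions concrete, here is a minimal, hypothetical TypeScript sketch; the Idea interface, the IDEA_STATUS_LABELS map, and the isIdea guard are illustrative examples only and are not part of the starter kit or the case study.

// Prefer interfaces for object definitions...
export interface Idea {
  id: string;
  title: string;
  isArchived: boolean; // descriptive name with an auxiliary verb
}

// ...and types for unions.
export type SortDirection = 'asc' | 'desc';

// Avoid enums; use a constant map instead.
export const IDEA_STATUS_LABELS = {
  draft: 'Draft',
  validated: 'Validated',
} as const;
export type IdeaStatus = keyof typeof IDEA_STATUS_LABELS;

// Prefer `unknown` over `any`, and use a type guard to keep filtering type-safe.
export function isIdea(value: unknown): value is Idea {
  return typeof value === 'object' && value !== null && 'id' in value && 'title' in value;
}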
Appendix 9: React-guidelines.md
# Guidelines: React Best Practices ## Component Structure - Use functional components over class components - Keep components small and focused - Extract reusable logic into custom hooks - Use composition over inheritance - Implement proper prop types with TypeScript - Structure React files: exported component, subcomponents, helpers, static content, types - Use declarative TSX for React components - Ensure that UI components use custom hooks for data fetching and operations rather than receive data via props, except for simplest components ## React Patterns - Utilize useState and useEffect hooks for state and side effects - Use React.memo for performance optimization when needed - Utilize React.lazy and Suspense for code-splitting - Implement error boundaries for robust error handling - Keep styles close to components ## React Performance - Avoid unnecessary re-renders - Lazy load components and images when possible - Implement efficient state management - Optimize rendering strategies - Optimize network requests - Employ memoization techniques (e.g., React.memo, useMemo, useCallback) ## React Project Structure /src - /components - UI components (every component in a separate file) - /hooks - public-facing custom hooks (every hook in a separate file) - /providers - React context providers (every provider in a separate file) - /pages - page components (every page in a separate file) - /stores - entity-specific Zustand stores (every store in a separate file) - /styles - global styles (if needed) - /types - shared TypeScript types and interfaces

Appendix 10: Zustand-guidelines.md
# Guidelines: Zustand Best Practices ## Core Principles - **Implement a data layer** for this React application following this specification carefully and to the letter. - **Complete separation of concerns**: All data operations should be accessible in UI components through simple and clean entity-specific hooks, ensuring state management logic is fully separated from UI logic. - **Shared state architecture**: Different UI components should work with the same shared state, despite using entity-specific hooks separately. ## Technology Stack - **State management**: Use Zustand for state management with automatic localStorage persistence via the persist middleware. ## Store Architecture - **Base entity:** Implement a BaseEntity interface with common properties that all entities extend:
export interface BaseEntity {
  id: string;
  createdAt: string; // ISO 8601 format
  updatedAt: string; // ISO 8601 format
}
- **Entity-specific stores**: Create separate Zustand stores for each entity type. - **Dictionary-based storage**: Use dictionary/map structures (Record<string, Entity>) rather than arrays for O(1) access by ID. - **Handle relationships**: Implement cross-entity relationships (like cascade deletes) within the stores where appropriate. ## Hook Layer The hook layer is the exclusive interface between UI components and the Zustand stores. It is designed to be simple, predictable, and follow a consistent pattern across all entities. ### Core Principles 1. **One Hook Per Entity**: There will be a single, comprehensive custom hook for each entity (e.g., useBlogPosts, useCategories). This hook is the sole entry point for all data and operations related to that entity. Separate hooks for single-item access will not be created. 2. **Return reactive data, not getter functions**: To prevent stale data, hooks must return the state itself, not a function that retrieves state.
Parameterize hooks to accept filters and return the derived data directly. A component calling a getter function will not update when the underlying data changes. 3. **Expose Dictionaries for O(1) Access**: To provide simple and direct access to data, every hook will return a dictionary (Record<string, Entity>) of the relevant items. ### The Standard Hook Pattern Every entity hook will follow this implementation pattern: 1. **Subscribe** to the entire dictionary of entities from the corresponding Zustand store. This ensures the hook is reactive to any change in the data. 2. **Filter** the data based on the parameters passed into the hook. This logic will be memoized with useMemo for efficiency. If no parameters are provided, the hook will operate on the entire dataset. 3. **Return a Consistent Shape**: The hook will always return an object containing: * A **filtered and sorted array** (e.g., blogPosts) for rendering lists. * A **filtered dictionary** (e.g., blogPostsDict) for convenient O(1) lookup within the component. * All necessary **action functions** (add, update, remove) and **relationship operations**. * All necessary **helper functions** and **derived data objects**. Helper functions are suitable for pure, stateless logic (e.g., calculators). Derived data objects are memoized values that provide aggregated or summarized information from the state (e.g., an object containing status counts). They must be derived directly from the reactive state to ensure they update automatically when the underlying data changes. ## API Design Standards - **Object Parameters**: Use object parameters instead of multiple direct parameters for better extensibility:
// Preferred
add({ title, categoryIds })
// Avoid
add(title, categoryIds)
- **Internal Methods**: Use underscore-prefixed methods for cross-store operations to maintain clean separation. ## State Validation Standards - **Existence checks**: All update and remove operations should validate entity existence before proceeding. - **Relationship validation**: Verify both entities exist before establishing relationships between them. ## Error Handling Patterns - **Operation failures**: Define behavior when operations fail (e.g., updating non-existent entities). - **Graceful degradation**: How to handle missing related entities in helper functions. ## Other Standards - **Secure ID generation**: Use crypto.randomUUID() for entity ID generation instead of custom implementations for better uniqueness guarantees and security. - **Return type consistency**: add operations return generated IDs for component workflows requiring immediate entity access, while update and remove operations return void to maintain clean modification APIs.
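To show how the store and hook layers described in these guidelines fit together, here is a minimal TypeScript sketch built around a hypothetical Idea entity; the names (Idea, useIdeaStore, useIdeas, and the idea-storage persistence key) are illustrative assumptions and are not taken from the starter kit or the case study.

import { useMemo } from 'react';
import { create } from 'zustand';
import { persist } from 'zustand/middleware';

export interface BaseEntity {
  id: string;
  createdAt: string; // ISO 8601 format
  updatedAt: string; // ISO 8601 format
}

export interface Idea extends BaseEntity {
  title: string;
  isArchived: boolean;
}

interface IdeaState {
  ideas: Record<string, Idea>; // dictionary for O(1) access by id
  addIdea: (params: { title: string }) => string; // add returns the generated id
  updateIdea: (params: { id: string; title?: string; isArchived?: boolean }) => void;
  removeIdea: (params: { id: string }) => void;
}

export const useIdeaStore = create<IdeaState>()(
  persist(
    (set, get) => ({
      ideas: {},
      addIdea: ({ title }) => {
        const id = crypto.randomUUID(); // secure ID generation
        const now = new Date().toISOString();
        set((state) => ({
          ideas: { ...state.ideas, [id]: { id, title, isArchived: false, createdAt: now, updatedAt: now } },
        }));
        return id;
      },
      updateIdea: ({ id, ...changes }) => {
        const existing = get().ideas[id];
        if (!existing) return; // existence check before updating
        set((state) => ({
          ideas: { ...state.ideas, [id]: { ...existing, ...changes, updatedAt: new Date().toISOString() } },
        }));
      },
      removeIdea: ({ id }) => {
        set((state) => {
          const rest = { ...state.ideas };
          delete rest[id];
          return { ideas: rest };
        });
      },
    }),
    { name: 'idea-storage' } // persisted to localStorage by the persist middleware
  )
);

// The single entry point for UI components: reactive, filterable, and memoized.
export function useIdeas(params?: { includeArchived?: boolean }) {
  const ideas = useIdeaStore((state) => state.ideas);
  const addIdea = useIdeaStore((state) => state.addIdea);
  const updateIdea = useIdeaStore((state) => state.updateIdea);
  const removeIdea = useIdeaStore((state) => state.removeIdea);

  const ideaList = useMemo(() => {
    const all = Object.values(ideas);
    const filtered = params?.includeArchived ? all : all.filter((idea) => !idea.isArchived);
    return filtered.sort((a, b) => a.createdAt.localeCompare(b.createdAt));
  }, [ideas, params?.includeArchived]);

  return { ideas: ideaList, ideasDict: ideas, addIdea, updateIdea, removeIdea };
}

A component would then call useIdeas() (optionally with a filter) and render the returned array, while mutations go through addIdea, updateIdea, and removeIdea; because the hook subscribes to the store's dictionary, the component re-renders automatically whenever the underlying data changes.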
Shades Of October (2025 Wallpapers Edition)
How about some new wallpapers to get your desktop ready for fall and the upcoming Halloween season? We've got you covered! Following our monthly tradition, the wallpapers in this post were created with love by the community for the community and can be downloaded for free. Enjoy!
As September comes to a close and October takes over, we are in the midst of a time of transition. The air in the morning feels crisper, the leaves are changing colors, and winding down with a warm cup of tea regains its almost-forgotten appeal after a busy summer. When we look closely, October is full of little moments that have the power to inspire, and whatever your secret to finding new inspiration might be, our monthly wallpapers series is bound to give you a little inspiration boost, too.

For this October edition, artists and designers from across the globe once again challenged their creative skills and designed wallpapers to spark your imagination. You'll find them compiled below, along with a selection of timeless October treasures from our wallpapers archives that are just too good to gather dust. A huge thank you to everyone who shared their designs with us this month – this post wouldn't exist without your creativity and kind support! Happy October!

You can click on every image to see a larger preview. We respect and carefully consider the ideas and motivation behind each and every artist's work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren't influenced by us in any way but rather designed from scratch by the artists themselves.

Submit your wallpaper design!
Feeling inspired? We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in!

Midnight Mischief
Designed by Libra Fire from Serbia.
preview
with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

AI
Designed by Ricardo Gimenes from Spain.
preview
with calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Glowing Pumpkin Lanterns
"I was inspired by the classic orange and purple colors of October and Halloween, and wanted to combine those two themes to create a fun pumpkin lantern background." – Designed by Melissa Bostjancic from New Jersey, United States.
preview
with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Halloween 2040
Designed by Ricardo Gimenes from Spain.
preview
with calendar: 640x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
without calendar: 640x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

When The Mind Opens
"In October, we observe World Mental Health Day. The open window in the head symbolizes light and fresh thoughts, the plant represents quiet inner growth and resilience, and the bird brings freedom and connection with the world. Together, they create an image of a mind that breathes, grows, and remains open to new beginnings." – Designed by Ginger IT Solutions from Serbia.
preview
with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Enter The Factory
"I took this photo while visiting an old factory. The red light was astonishing." – Designed by Philippe Brouard from France.
preview
with calendar: 1024x768, 1366x768, 1600x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600, 2880x1800, 3840x2160
without calendar: 1024x768, 1366x768, 1600x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600, 2880x1800, 3840x2160

The Crow And The Ghosts
"If my heart were a season, it would be autumn." – Designed by Lívia Lénárt from Hungary.
preview
without calendar: 320x480, 1024x1024, 1280x1024, 1600x1200, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

The Night Drive
Designed by Vlad Gerasimov from Georgia.
preview
without calendar: 800x480, 800x600, 1024x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1440x900, 1440x960, 1400x1050, 1600x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600, 2880x1800, 3072x1920, 3840x2160, 5120x2880

Spooky Town
Designed by Xenia Latii from Germany.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Bird Migration Portal
"When I was young, I had a bird's nest not so far from my room window. I watched the birds almost every day; because those swallows always left their nests in October. As a child, I dreamt that they all flew together to a nicer place, where they were not so cold." – Designed by Eline Claeys from Belgium.
preview
without calendar: 1280x1024, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Hanlu
"The term 'Hanlu' literally translates as 'Cold Dew.' The cold dew brings brisk mornings and evenings. Eventually the briskness will turn cold, as winter is coming soon. And chrysanthemum is the iconic flower of Cold Dew." – Designed by Hong, ZI-Qing from Taiwan.
preview
without calendar: 640x480, 800x600, 1024x768, 1080x1920, 1152x864, 1280x720, 1280x960, 1366x768, 1400x1050, 1600x1200, 1920x1080, 1920x1440, 2560x1440

Autumn's Splendor
"The transition to autumn brings forth a rich visual tapestry of warm colors and falling leaves, making it a natural choice for a wallpaper theme." – Designed by Farhan Srambiyan from India.
preview
without calendar: 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Ghostbusters
Designed by Ricardo Gimenes from Spain.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Hello Autumn
"Did you know that squirrels don't just eat nuts? They really like to eat fruit, too. Since apples are the seasonal fruit of October, I decided to combine both things into a beautiful image." – Designed by Erin Troch from Belgium.
preview
without calendar: 320x480, 800x480, 1024x1024, 1280x800, 1366x768, 1600x1200, 1680x1050, 1680x1200, 1920x1440, 2560x1440

Discovering The Universe
"Autumn is the best moment for discovering the universe. I am looking for a new galaxy or maybe… a UFO!" – Designed by Verónica Valenzuela from Spain.
preview
without calendar: 800x480, 1024x768, 1152x864, 1280x800, 1280x960, 1440x900, 1680x1200, 1920x1080, 2560x1440

The Return Of The Living Dead
Designed by Ricardo Gimenes from Spain.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Goddess Makosh
"At the end of the kolodar, as everything begins to ripen, the village sets out to harvesting. Together with the farmers goes Makosh, the Goddess of fields and crops, ensuring a prosperous harvest. What she gave her life and health all year round is now mature and rich, thus, as a sign of gratitude, the girls bring her bread and wine. The beautiful game of the goddess makes the hard harvest easier, while the song of the farmer permeates the field." – Designed by PopArt Studio from Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Strange October Journey
"October makes the leaves fall to cover the land with lovely auburn colors and brings out all types of weird with them." – Designed by Mi Ni Studio from Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Autumn Deer
Designed by Amy Hamilton from Canada.
preview
without calendar: 1024x768, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1920x1080, 1920x1200, 2048x1536, 2560x1440, 2880x1800

Transitions
"To me, October is a transitional month. We gradually slide from summer to autumn. That's why I chose to use a lot of gradients. I also wanted to work with simple shapes, because I think of October as the 'back to nature/back to basics month'." – Designed by Jelle Denturck from Belgium.
preview without calendar: 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2880x1800 Happy Fall! âFall is my favorite season!â â Designed by Thuy Truong from the United States. preview without calendar: 320x480, 640x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x800, 1366x768, 1440x900, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Roger That Rogue Rover âThe story is a mash-up of retro science fiction and zombie infection. What would happen if a Mars rover came into contact with an unknown Martian material and got infected with a virus? What if it reversed its intended purpose of research and exploration? Instead choosing a life of chaos and evil. What if they all ran rogue on Mars? Would humans ever dare to voyage to the red planet?â Designed by Frank Candamil from the United States. preview without calendar: 1024x768, 1024x1024, 1280x800, 1680x1050, 1920x1080, 1920x1200, 2560x1440 Turtles In Space âFinished September, with October comes the month of routines. This year we share it with turtles that explore space.â â Designed by Veronica Valenzuela from Spain. preview without calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440 First Scarf And The Beach âWhen I was little, my parents always took me and my sister for a walk at the beach in Nieuwpoort. We didn't really do those beach walks in the summer but always when the sky started to turn gray and the days became colder. My sister and I always took out our warmest scarfs and played in the sand while my parents walked behind us. I really loved those Saturday or Sunday mornings where we were all together. I think October (when itâs not raining) is the perfect month to go to the beach for âuitwaaienâ (to blow out), to walk in the wind and take a break and clear your head, relieve the stress or forget oneâs problems.â â Designed by Gwen Bogaert from Belgium. preview without calendar: 320x480, 1280x1024, 2560x1440, 2880x1800 Shades Of Gold âWe are about to experience the magical imagery of nature, with all the yellows, ochers, oranges, and reds coming our way this fall. With all the subtle sunrises and the burning sunsets before us, we feel so joyful that we are going to shout it out to the world from the top of the mountains.â â Designed by PopArt Studio from Serbia. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Autumn Vibes âAutumn has come, the time of long walks in the rain, weekends spent with loved ones, with hot drinks, and a lot of tenderness. Enjoy.â â Designed by LibraFire from Serbia. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Game Night And Hot Chocolate âTo me, October is all about cozy evenings with hot chocolate, freshly baked cookies, and a game night with friends or family.â â Designed by Lieselot Geirnaert from Belgium. preview without calendar: 320x480, 1024x1024, 1280x1024, 2560x1440 Haunted House âLove all the Halloween costumes and decorations!â â Designed by Tazi from Australia. 
preview without calendar: 320x480, 640x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x960, 1600x1200, 1920x1080, 1920x1440, 2560x1440 Say Bye To Summer âAnd hello to autumn! The summer heat and high season is over. Itâs time to pack our backpacks and head for the mountains â there are many treasures waiting to be discovered!â Designed by Agnes Sobon from Poland. preview without calendar: 1280x800, 1440x900, 1680x1050, 1920x1080, 1920x1200, 2560x1440 Tea And Cookies âAs it gets colder outside, all I want to do is stay inside with a big pot of tea, eat cookies and read or watch a movie, wrapped in a blanket. Is it just me?â â Designed by Miruna Sfia from Romania. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1440x900, 1440x1050, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 The Return Designed by Ricardo Gimenes from Spain. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Boo! Designed by Mad Fish Digital from Portland, OR. preview without calendar: 320x480, 1024x1024, 1280x720, 1680x1200, 1920x1080, 2560x1440 Trick Or Treat âHave you ever wondered if all the little creatures of the animal kingdom celebrate Halloween as humans do? My answer is definitely âYES! They do!â They use acorns as baskets to collect all the treats, pastry brushes as brooms for the spookiest witches and hats made from the tips set of your pastry bag. So, if you happen to miss something from your kitchen or from your tool box, it may be one of them, trying to get ready for All Hallowsâ Eve.â â Designed by Carla Dipasquale from Italy. preview without calendar: 640x480, 800x600, 1024x768, 1280x960, 1440x900, 1600x1200, 1680x1200, 1920x1080, 1920x1440, 2560x1440 Dope Code âOctober is the month when the weather in Poland starts to get colder, and it gets very rainy, too. You canât always spend your free time outside, so itâs the perfect opportunity to get some hot coffee and work on your next cool web project!â â Designed by Robert Brodziak from Poland. preview without calendar: 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Happy Halloween Designed by Ricardo Gimenes from Spain. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 Ghostober Designed by Ricardo Delgado from Mexico City. preview without calendar: 1024x768, 1024x1024, 1280x800, 1280x1024, 2560x1440 Get Featured Next Month Would you like to get featured in our next wallpapers post? Weâll publish the November wallpapers on October 31, so if youâd like to be a part of the collection, please donât hesitate to submit your design. We canât wait to see what youâll come up with!
From Prompt To Partner: Designing Your Custom AI Assistant
What if your best AI prompts didn't disappear into your unorganized chat history, but came back tomorrow as a reliable assistant? In this article, you'll learn how to turn one-off "aha" prompts into reusable assistants that are tailored to your audience, grounded in your knowledge, and consistent every time, saving you (and your team) from typing the same 448-word prompt ever again.
In "A Week In The Life Of An AI-Augmented Designer", Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In "Prompting Is A Design Act", we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we'll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.

AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function: allowing you to customize the default AI model for your unique needs. If we carry over our smart intern analogy, think of these as interns trained to assist you with specific tasks, eliminating the need for repeated instructions or information, and able to support not just you, but your entire team.

Why Build Your Own Assistant?

If you've ever copied and pasted the same mega-prompt for the nth time, you've experienced the pain. An AI assistant turns a one-off "great prompt" into a dependable teammate. And if you've used any of the publicly available AI assistants, you've quickly realized that they're usually generic and not tailored for your use. Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in your voice, with your context and constraints baked in. Instead of reinventing the wheel by writing new prompts each time, repeatedly copy-pasting your structured prompts, or spending cycles trying to make a public AI assistant work the way you need it to, your own AI assistant allows you and others to easily get better, repeatable, consistent results faster.

Benefits Of Reusing Prompts, Even Your Own

Some of the benefits of building your own AI assistant over writing or reusing your prompts include:

Focused on a real repeating problem
A good AI assistant isn't a general-purpose "do everything" bot that you need to keep tweaking. It focuses on a single, recurring problem that takes a long time to complete manually and often results in varying quality depending on who's doing it (e.g., analyzing customer feedback).

Customized for your context
Most large language models (LLMs, such as ChatGPT) are designed to be everything to everyone. An AI assistant changes that by allowing you to customize it to automatically work the way you want it to, instead of like a generic AI.

Consistency at scale
You can use the WIRE+FRAME prompt framework to create structured, reusable prompts. An AI assistant is the next logical step: instead of copy-pasting that fine-tuned prompt and sharing contextual information and examples each time, you can bake it into the assistant itself, allowing you and others to achieve the same consistent results every time.

Codifying expertise
Every time you turn a great prompt into an AI assistant, you're essentially bottling your expertise. Your assistant becomes a living design guide that outlasts projects (and even job changes).

Faster ramp-up for teammates
Instead of new designers starting from a blank slate, they can use pre-tuned assistants. Think of it as knowledge transfer without the long onboarding lecture.

Reasons For Your Own AI Assistant Instead Of Public AI Assistants

Public AI assistants are like stock templates. While they serve a specific purpose compared to the generic AI platform, and are useful starting points, if you want something tailored to your needs and team, you should really build your own.
A few reasons for building your own AI assistant instead of using a public assistant someone else created include:

Fit: Public assistants are built for the masses. Your work has quirks, tone, and processes they'll never quite match.

Trust & Security: You don't control what instructions or hidden guardrails someone else baked in. With your own assistant, you know exactly what it will (and won't) do.

Evolution: An AI assistant you design and build can grow with your team. You can update files, tweak prompts, and maintain a changelog, all things a public bot won't do for you.

Your own AI assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. And while they are tailored to your and your team's way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply: don't share anything you wouldn't want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI assistant can potentially reveal its inner workings or data.

Note: We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM model used, the model's training, mood, and flair for creative hallucinations.

When Not To Build An AI Assistant (Yet)

An AI assistant is great when the same audience has the same problem often. When the fit isn't there, the risk is high; you should skip building an AI assistant for now, as explained below:

One-off or rare tasks
If it won't be reused at least monthly, I'd recommend keeping it as a saved WIRE+FRAME prompt. For example, something for a one-time audit or creating placeholder content for a specific screen.

Sensitive or regulated data
If you need to build in personally identifiable information (PII), health, finance, legal, or trade secrets, err on the side of not building an AI assistant. Even if the AI platform promises not to use your data, I'd strongly suggest using redaction or an approved enterprise tool with the necessary safeguards in place (company-approved enterprise versions of Microsoft Copilot, for instance).

Heavy orchestration or logic
Multi-step workflows, API calls, database writes, and approvals go beyond the scope of an AI assistant into agentic territory (as of now). I'd recommend not trying to build an AI assistant for these cases.

Real-time information
AI assistants may not be able to access real-time data like prices, live metrics, or breaking news. If you need these, you can upload near-real-time data (as we do below) or connect with data sources that you or your company controls, rather than relying on the open web.

High-stakes outputs
For cases related to compliance, legal, medical, or any other area requiring auditability, consider implementing process guardrails and training to keep humans in the loop for proper review and accountability.

No measurable win
If you can't name a success metric (such as time saved, first-draft quality, or fewer re-dos), I'd recommend keeping it as a saved WIRE+FRAME prompt.

These are signs that you shouldn't build your AI assistant now, but that doesn't mean you shouldn't ever.
Revisit this decision when you notice that you're starting to use the same prompt every week, multiple teammates ask for it, or the time spent manually copy-pasting and refining starts exceeding ~15 minutes. Those are some signs that an AI assistant will pay back quickly. In a nutshell, build an AI assistant when you can name the problem, the audience, the frequency, and the win.

The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. No advanced knowledge, coding skills, or hacks needed.

As Always, Start with the User

This should go without saying to UX professionals, but it's worth a reminder: if you're building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything. Who will use this assistant? What's the specific pain or task they struggle with today? What language, tone, and examples will feel natural to them? Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.

From Prompt To Assistant

You've already done the heavy lifting with WIRE+FRAME. Now you're just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.

M: Map your prompt
Port your successful WIRE+FRAME prompt into the AI assistant.

A: Add knowledge and training
Ground the assistant in your world. Upload knowledge files, examples, or guides that make it uniquely yours.

T: Tailor for audience
Make it feel natural to the people who will use it. Give it the right capabilities, but also adjust its settings, tone, examples, and conversation starters so they land with your audience.

C: Check, test, and refine
Test the preview with different inputs and refine until you get the results you expect.

H: Hand off and maintain
Set sharing options and permissions, share the link, and maintain it.

A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:

Prototype Prodigy: Transform rough ideas into prototypes and export them into Figma to refine.

Critique Coach: Review wireframes or mockups and point out accessibility and usability gaps.

But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: "An assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on." And that's the one we will build in this article. Say hello to Insight Interpreter.

Walkthrough: Insight Interpreter

Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback that may not have been asked for through social media or public reviews (unsolicited). This is a goldmine of information, but it can be messy and overwhelming trying to make sense of it all, and it's nobody's idea of fun. Here's where an AI assistant like the Insight Interpreter can help. We'll turn the example prompt created using the WIRE+FRAME framework in "Prompting Is A Design Act" into a CustomGPT.
When you start building a CustomGPT by visiting https://chat.openai.com/gpts/editor, you'll see two paths:

Conversational interface
Vibe-chat your way. It's easy and quick, but, similar to unstructured prompts, your inputs get baked in a little messily, so you may end up with vague or inconsistent instructions.

Configure interface
The structured form where you type instructions, upload files, and toggle capabilities. Less instant gratification, less winging it, but more control. This is the option you'll want for assistants you plan to share or depend on regularly.

The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist; in this article, we'll walk through using it in configure mode as a more formal checklist.

M: Map Your Prompt

Paste your full WIRE+FRAME prompt into the Instructions section exactly as written. As a refresher, I've included the mapping and snippets of the detailed prompt from before:

Who & What: The AI persona and the core deliverable ("...senior UX researcher and customer insights analyst... specialize in synthesizing qualitative data from diverse sources...").

Input Context: Background or data scope to frame the task ("...analyzing customer feedback uploaded from sources such as...").

Rules & Constraints: Boundaries ("...do not fabricate pain points, representative quotes, journey stages, or patterns...").

Expected Output: Format and fields of the deliverable ("...a structured list of themes. For each theme, include...").

Flow: Explicit, ordered sub-tasks ("Recommended flow of tasks: Step 1...").

Reference Voice: Tone, mood, or reference ("...concise, pattern-driven, and objective...").

Ask for Clarification: Ask questions if unclear ("...if data is missing or unclear, ask before continuing...").

Memory: Memory to recall earlier definitions ("Unless explicitly instructed otherwise, keep using this process...").

Evaluate & Iterate: Have the AI self-critique outputs ("...critically evaluate... suggest improvements...").

If you're building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective Instructions sections.

A: Add Knowledge And Training

In the knowledge section, upload up to 20 files, clearly labeled, that will help the CustomGPT respond effectively. Keep files small and versioned: reviews_Q2_2025.csv beats latestfile_final2.csv. For this prompt for analyzing customer feedback, generating themes organized by customer journey, and rating them by severity and effort, files could include:

Taxonomy of themes;
Instructions on parsing uploaded data;
Examples of real UX research reports using this structure;
Scoring guidelines for severity and effort, e.g., what makes something a 3 vs. a 5 in severity;
Customer journey map stages;
Customer feedback file templates (not actual data).

An example of a file to help it parse uploaded data is shown below:

T: Tailor For Audience

Audience tailoring
If you are building this for others, your prompt should have addressed tone in the "Reference Voice" section. If you didn't, do it now, so the CustomGPT can be tailored to the tone and expertise level of the users who will use it. In addition, use the Conversation starters section to add a few examples or common prompts for users to start using the CustomGPT, again, worded for your users.
For instance, we could use "Analyze feedback from the attached file" for our Insights Interpreter to make it more self-explanatory for anyone, instead of "Analyze data," which may be good enough if you were using it alone. For my Designerly Curiosity GPT, assuming that users may not know what it could do, I use "What are the types of curiosity?" and "Give me a micro-practice to spark curiosity".

Functional tailoring
Fill in the CustomGPT name, icon, description, and capabilities.

Name: Pick one that will make it clear what the CustomGPT does. Let's use "Insights Interpreter - Customer Feedback Analyzer". If needed, you can also add a version number. This name will show up in the sidebar when people use it or pin it, so make the first part memorable and easily identifiable.

Icon: Upload an image or generate one. Keep it simple so it can be easily recognized at a small size when people pin it in their sidebar.

Description: A brief yet clear description of what the CustomGPT can do. If you plan to list it in the GPT store, this will help people decide if they should pick yours over something similar.

Recommended Model: If your CustomGPT needs the capabilities of a particular model (e.g., needs GPT-5 thinking for detailed analysis), select it. In most cases, you can safely leave it up to the user or select the most common model.

Capabilities: Turn off anything you won't need. We'll turn off "Web Search" to allow the CustomGPT to focus only on uploaded data, without expanding the search online, and we will turn on "Code Interpreter & Data Analysis" to allow it to understand and process uploaded files. "Canvas" allows users to work on a shared canvas with the GPT to edit writing tasks; "Image generation" is only needed if the CustomGPT has to create images.

Actions: Making third-party APIs available to the CustomGPT; advanced functionality we don't need.

Additional Settings: Sneakily hidden and opted in by default; I opt out of training OpenAI's models.

C: Check, Test & Refine

Do one last visual check to make sure you've filled in all applicable fields and the basics are in place: is the concept sharp and clear (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily?

Once those boxes are checked, move into testing. Use the Preview panel to verify that your CustomGPT performs as well as, or better than, your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn't now, check whether new instructions or knowledge files are overriding it.

When things don't look right, here are quick debugging fixes:

Generic answers? Tighten Input Context or update the knowledge files.
Hallucinations? Revisit your Rules section. Turn off web browsing if you don't need external data.
Wrong tone? Strengthen Reference Voice or swap in clearer examples.
Inconsistent? Test across models in preview and set the most reliable one as "Recommended."

H: Hand Off And Maintain

When your CustomGPT is ready, you can publish it via the "Create" option. Select the appropriate access option:

Only me: Private use. Perfect if you're still experimenting or keeping it personal.

Anyone with the link: Exactly what it means. Shareable but not searchable. Great for pilots with a team or small group. Just remember that links can be reshared, so treat them as semi-public.
GPT Store: Fully public. Your assistant is listed and findable by anyone browsing the store. (This is the option we'll use.)

Business workspace (if you're on GPT Business): Share with others within your business account only; the easiest way to keep it in-house and controlled.

But hand-off doesn't end with hitting publish; you should maintain the assistant to keep it relevant and useful:

Collect feedback: Ask teammates what worked, what didn't, and what they had to fix manually.

Iterate: Apply changes directly or duplicate the GPT if you want multiple versions in play. You can find all your CustomGPTs at: https://chatgpt.com/gpts/mine

Track changes: Keep a simple changelog (date, version, updates) for traceability.

Refresh knowledge: Update knowledge files and examples on a regular cadence so answers don't go stale.

And that's it! Our Insights Interpreter is now live!

Since we used the WIRE+FRAME prompt from the previous article to create the Insights Interpreter CustomGPT, I compared the outputs: the results are similar, with slight differences, and that's expected. If you compare the results carefully, the themes, issues, journey stages, frequency, severity, and estimated effort match, with some differences in the wording of the theme, issue summary, and problem statement. The opportunities and quotes have more visible differences. Most of this is because the CustomGPT knowledge and training files, including instructions, examples, and guardrails, now live as always-on guidance.

Keep in mind that in reality, generative AI is by nature generative, so outputs will vary. Even with the same data, you won't get identical wording every time. In addition, underlying models and their capabilities change rapidly. If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare for structure, priorities, and evidence rather than exact wording.

While I'd love for you to use Insights Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own. That way, it will be exactly what you or your team needs, including the tone, context, and output formats, and you'll get the real AI assistant you need!

Inspiration For Other AI Assistants

We just built the Insight Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI assistant:

Workshop Wizard: Generates workshop agendas, icebreaker questions, and follow-up survey drafts.

Research Roundup Buddy: Summarizes raw transcripts into key themes, then creates highlight reels (quotes + visuals) for team share-outs.

Persona Refresher: Updates stale personas with the latest customer feedback, then rewrites them in different tones (boardroom formal vs. design-team casual).

Content Checker: Proofs copy for tone, accessibility, and reading level before it ever hits your site.

Trend Tamer: Scans competitor reviews and identifies emerging patterns you can act on before they reach your roadmap.

Microcopy Provocateur: Tests alternate copy options by injecting different tones (sassy, calm, ironic, nurturing) and role-playing how users might react; especially useful for error states or calls to action.

Ethical UX Debater: Challenges your design decisions and deceptive designs by simulating the voice of an ethics board or concerned user.

The best AI assistants come from carefully inspecting your workflow and looking for areas where AI can augment your work regularly and repetitively.
Then follow the steps above to build a team of customized AI assistants.

Ask Me Anything About Assistants

What are some limitations of a CustomGPT?
Right now, the best parallel for AI is a very smart intern with access to a lot of information. CustomGPTs still run on LLMs that are essentially trained on a lot of information and programmed to predictively generate responses based on that data, including possible bias, misinformation, or incomplete information. Keeping that in mind, you can make that intern provide better and more relevant results by using your uploads as onboarding docs, your guardrails as a job description, and your updates as retraining.

Can I copy someone else's public CustomGPT and tweak it?
Not directly, but if you get inspired by another CustomGPT, you can look at how it's framed and rebuild your own using WIRE+FRAME and MATCH. That way, you make it your own and have full control of the instructions, files, and updates. You can, however, do that with Google's equivalent, Gemini Gems. Shared Gems behave similarly to shared Google Docs: once shared, any Gem instructions and files that you have uploaded can be viewed by any user with access to the Gem. Any user with edit access to the Gem can also update and delete the Gem.

How private are my uploaded files?
The files you upload are stored and used to answer prompts to your CustomGPT. If your CustomGPT is not private, or you didn't disable the hidden setting that allows CustomGPT conversations to improve the model, that data could be referenced. Don't upload sensitive, confidential, or personal data you wouldn't want circulating. Enterprise accounts do have some protections, so check with your company.

How many files can I upload, and does size matter?
Limits vary by platform, but smaller, specific files usually perform better than giant docs. Think "chapter" instead of "entire book." At the time of publishing, CustomGPTs allow up to 20 files, Copilot Agents up to 200 (if you need anywhere near that many, chances are your agent is not focused enough), and Gemini Gems up to 10.

What's the difference between a CustomGPT and a Project?
A CustomGPT is a focused assistant, like an intern trained to do one role well (like "Insight Interpreter"). A Project is more like a workspace where you can group multiple prompts, files, and conversations together for a broader effort. CustomGPTs are specialists; Projects are containers. If you want something reusable, shareable, and role-specific, go with a CustomGPT. If you want to organize broader work with multiple tools, outputs, and shared knowledge, Projects are the better fit.

From Reading To Building

In this AI x Design series, we've gone from messy prompting ("A Week In The Life Of An AI-Augmented Designer") to a structured prompt framework, WIRE+FRAME ("Prompting Is A Design Act"). And now, in this article, your very own reusable AI sidekick. CustomGPTs don't replace designers but augment them. The real magic isn't in the tool itself, but in how you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They extend your craft, codify your expertise, and give your team leverage that generic AI models can't.

Build one this week. Even better, today. Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.
Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)
Yegor Gilyov examines the problem of over-reliance on static high-fidelity mockups, which often leave the conceptual model and user flows dangerously underdeveloped. He then explores whether AI-powered prototyping is the answer, questioning whether the path forward is the popular "vibe coding" approach or a more structured, intent-driven approach.
There is a spectrum of opinions on how dramatically all creative professions will be changed by the coming wave of agentic AI, from the very skeptical to the wildly optimistic and even apocalyptic. I think that even if you are on the "skeptical" end of the spectrum, it makes sense to explore ways this new technology can help with your everyday work. As for my everyday work, I've been doing UX and product design for about 25 years now, and I'm always keen to learn new tricks and share them with colleagues. Right now, I'm interested in AI-assisted prototyping, and I'm here to share my thoughts on how it can change the process of designing digital products.

To set your expectations up front: this exploration focuses on a specific part of the product design lifecycle. Many people know about the Double Diamond framework, which shows the path from problem to solution. However, I think it's the Triple Diamond model that makes an important point for our needs. It explicitly separates the solution space into two phases: Solution Discovery (ideating and validating the right concept) and Solution Delivery (engineering the validated concept into a final product). This article is focused squarely on that middle diamond: Solution Discovery. How AI can help with the preceding (Problem Discovery) and the following (Solution Delivery) stages is out of the scope of this article. Problem Discovery is less about prototyping and more about research, and while I believe AI can revolutionize the research process as well, I'll leave that to people more knowledgeable in the field. As for Solution Delivery, it is more about engineering optimization. There's no doubt that software engineering in the AI era is undergoing dramatic changes, but I'm not an engineer; I'm a designer, so let me focus on my "sweet spot".

And my "sweet spot" has a specific flavor: designing enterprise applications. In this world, the main challenge is taming complexity: dealing with complicated data models and guiding users through non-linear workflows. This background has had a big impact on my approach to design, putting a lot of emphasis on the underlying logic and structure. This article explores the potential of AI through this lens. I'll start by outlining the typical artifacts designers create during Solution Discovery. Then, I'll examine the problems with how this part of the process often plays out in practice. Finally, we'll explore whether AI-powered prototyping can offer a better approach, and if so, whether it aligns with what people call "vibe coding," or calls for a more deliberate and disciplined way of working.

What We Create During Solution Discovery

The Solution Discovery phase begins with the key output from the preceding research: a well-defined problem and a core hypothesis for a solution. This is our starting point. The artifacts we create from here are all aimed at turning that initial hypothesis into a tangible, testable concept. Traditionally, at this stage, designers can produce artifacts of different kinds, progressively increasing fidelity: from napkin sketches, boxes-and-arrows, and conceptual diagrams to hi-fi mockups, then to interactive prototypes, and in some cases even live prototypes. Artifacts of lower fidelity allow fast iteration and enable the exploration of many alternatives, while artifacts of higher fidelity help to understand, explain, and validate the concept in all its details. It's important to think holistically, considering different aspects of the solution.
I would highlight three dimensions:

Conceptual model: Objects, relations, attributes, actions;
Visualization: Screens, from rough sketches to hi-fi mockups;
Flow: From very high-level user journeys to more detailed ones.

One can argue that those are layers rather than dimensions, and that each of them builds on the previous ones (for example, according to Semantic IxD by Daniel Rosenberg), but I see them more as different facets of the same thing, so the design process through them is not necessarily linear: you may need to switch from one perspective to another many times. This is how different types of design artifacts map to these dimensions:

As Solution Discovery progresses, designers move from the left part of this map to the right, from low fidelity to high fidelity, from ideating to validating, from diverging to converging. Note that at the beginning of the process, different dimensions are supported by artifacts of different types (boxes-and-arrows, sketches, class diagrams, etc.), and only closer to the end can you build a live prototype that encompasses all three dimensions: conceptual model, visualization, and flow. This progression shows a classic trade-off, like the difference between a pencil drawing and an oil painting. The drawing lets you explore ideas in the most flexible way, whereas the painting has a lot of detail and overall looks much more realistic, but is hard to adjust. Similarly, as we go towards artifacts that integrate all three dimensions at higher fidelity, our ability to iterate quickly and explore divergent ideas goes down. This inverse relationship has long been an accepted, almost unchallenged, limitation of the design process.

The Problem With The Mockup-Centric Approach

Faced with this difficult trade-off, teams often opt for the easiest way out. On the one hand, they need to show that they are making progress and create things that appear detailed. On the other hand, they rarely can afford to build interactive or live prototypes. This leads them to over-invest in one type of artifact that seems to offer the best of both worlds. As a result, the neatly organized "bento box" of design artifacts we saw previously gets shrunk down to just one compartment: creating static high-fidelity mockups. This choice is understandable, as several forces push designers in this direction. Stakeholders are always eager to see nice pictures, while artifacts representing user flows and conceptual models receive much less attention and priority. They are too high-level and hardly usable for validation, and usually, not everyone can understand them. On the other side of the fidelity spectrum, interactive prototypes require too much effort to create and maintain, and creating live prototypes in code used to require special skills (and again, effort). And even when teams make this investment, they do so at the end of Solution Discovery, during the convergence stage, when it is often too late to experiment with fundamentally different ideas. With so much effort already sunk, there is little appetite to go back to the drawing board. It's no surprise, then, that many teams default to the perceived safety of static mockups, seeing them as a middle ground between the roughness of sketches and the overwhelming complexity and fragility that prototypes can have. As a result, validation with users doesn't provide enough confidence that the solution will actually solve the problem, and teams are forced to make a leap of faith to start building.
To make matters worse, they do so without a clear understanding of the conceptual model, the user flows, and the interactions, because from the very beginning, designers' attention has been heavily skewed toward visualization. The result is often a design artifact that resembles the famous "horse drawing" meme: beautifully rendered in the parts everyone sees first (the mockups), but dangerously underdeveloped in its underlying structure (the conceptual model and flows). While this is a familiar problem across the industry, its severity depends on the nature of the project. If your core challenge is to optimize a well-understood, linear flow (like many B2C products), a mockup-centric approach can be perfectly adequate. The risks are contained, and the "lopsided horse" problem is unlikely to be fatal. However, it's different for the systems I specialize in: complex applications defined by intricate data models and non-linear, interconnected user flows. Here, the biggest risks are not on the surface but in the underlying structure, and a lack of attention to the latter would be a recipe for disaster.

Transforming The Design Process

This situation makes me wonder: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one? If we were able to answer this question, we would:

Learn faster. By going straight from intent to a testable artifact, we cut the feedback loop from weeks to days.

Gain more confidence. Users interact with real logic, which gives us more proof that the idea works.

Enforce conceptual clarity. A live prototype cannot hide a flawed or ambiguous conceptual model.

Establish a clear and lasting source of truth. A live prototype, combined with a clearly documented design intent, provides the engineering team with an unambiguous specification.

Of course, the desire for such a process is not new. This vision of a truly prototype-driven workflow is especially compelling for enterprise applications, where the benefits of faster learning and forced conceptual clarity are the best defense against costly structural flaws. But this ideal was still out of reach, because prototyping in code took so much work and such specialized skills. Now, the rise of powerful AI coding assistants changes this equation in a big way.

The Seductive Promise Of "Vibe Coding"

And the answer seems obvious: vibe coding!

"Vibe coding is an artificial intelligence-assisted software development style popularized by Andrej Karpathy in early 2025. It describes a fast, improvisational, collaborative approach to creating software where the developer and a large language model (LLM) tuned for coding is acting rather like pair programmers in a conversational loop."
- Wikipedia

The original tweet by Andrej Karpathy:

The allure of this approach is undeniable. If you are not a developer, you are bound to feel awe when you describe a solution in plain language and, moments later, you can interact with it. This seems to be the ultimate fulfillment of our goal: a direct, frictionless path from an idea to a live prototype. But is this method reliable enough to build our new design process around it?

The Trap: A Process Without A Blueprint

Vibe coding mixes up a description of the UI with a description of the system itself, resulting in a prototype based on changing assumptions rather than a clear, solid model. The pitfall of vibe coding is that it encourages us to express our intent in the most ambiguous way possible: by having a conversation.
This is like hiring a builder and telling them what to do one sentence at a time, without ever presenting them with a blueprint. They could build a wall that looks great, but you can't be sure that it can hold weight. I'll give you one example illustrating the problems you may face if you try to jump over the chasm between your idea and a live prototype relying on pure vibe coding in the spirit of Andrej Karpathy's tweet.

Imagine I want to prototype a solution to keep track of tests to validate product ideas. I open my vibe coding tool of choice (I intentionally don't disclose its name, as I believe they all are awesome yet prone to similar pitfalls) and start with the following prompt:

I need an app to track tests. For every test, I need to fill out the following data:
- Hypothesis (we believe that...)
- Experiment (to verify that, we will...)
- When (a single date, or a period)
- Status (New/Planned/In Progress/Proven/Disproven)

And in a minute or so, I get a working prototype. Inspired by success, I go further:

Please add the ability to specify a product idea for every test. Also, I want to filter tests by product ideas and see how many tests each product idea has in each status.

And the result is still pretty good. But then I want to extend the functionality related to product ideas:

Okay, one more thing. For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. Perhaps I need a separate page focused on the product idea, with all the relevant information and related tests.

And from this point on, the results get more and more confusing. The flow of creating tests hasn't changed much. I can still create a bunch of tests, and they seem to be organized by product ideas. But when I click "Product Ideas" in the top navigation, I see nothing: I need to create my ideas from scratch, and they are not connected to the tests I created before. Moreover, when I go back to "Tests", I see that they are all gone. Clearly something went wrong, and my AI assistant confirms that:

No, this is not expected behavior; it's a bug! The issue is that tests are being stored in two separate places (local state in the Index page and App state), so tests created on the main page don't sync with the product ideas page.

Sure, eventually it fixed that bug, but note that we encountered this on just the third step, when we asked to slightly extend the functionality of a very simple app. The more layers of complexity we add, the more roadblocks of this sort we are bound to face. Also note that this specific problem, a not fully thought-out relationship between two entities (product ideas and tests), is not isolated at the technical level, and therefore it didn't go away once the technical bug was fixed. The underlying conceptual model is still broken, and it manifests in the UI as well. For example, you can still create "orphan" tests that are not connected to any item on the "Product Ideas" page. As a result, you may end up with different numbers of ideas and tests on different pages of the app.

Let's diagnose what really happened here. The AI's response that this is a "bug" is only half the story. The true root cause is a conceptual model failure. My prompts never explicitly defined the relationship between product ideas and tests. The AI was forced to guess, which led to the broken experience. For a simple demo, this might be a fixable annoyance. But for a data-heavy enterprise application, this kind of structural ambiguity is fatal.
It demonstrates the fundamental weakness of building without a blueprint, which is precisely what vibe coding encourages. Don't take this as a criticism of vibe coding tools. They are creating real magic. However, the fundamental truth about "garbage in, garbage out" is still valid. If you don't express your intent clearly enough, chances are the result won't fulfill your expectations. Another problem worth mentioning is that even if you wrestle it into a state that works, the artifact is a black box that can hardly serve as a reliable specification for the final product. The initial meaning is lost in the conversation, and all that's left is the end result. This makes the development team "code archaeologists" who have to figure out what the designer was thinking by reverse-engineering the AI's code, which is frequently very complicated. Any speed gained at the start is lost right away because of this friction and uncertainty.

From Fast Magic To A Solid Foundation

Pure vibe coding, for all its allure, encourages building without a blueprint. As we've seen, this results in structural ambiguity, which is not acceptable when designing complex applications. We are left with a seemingly quick but fragile process that creates a black box that is difficult to iterate on and even more so to hand off. This leads us back to our main question: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? The answer lies in a more methodical, disciplined, and therefore trustworthy process. In Part 2 of this series, "A Practical Guide to Building with Clarity", I will outline the entire workflow for Intent Prototyping. This method places the explicit intent of the designer at the forefront of the process while embracing the potential of AI-assisted coding. Thank you for reading, and I look forward to seeing you in Part 2.
Ambient Animations In Web Design: Principles And Implementation (Part 1)
Creating motion can be tricky. Too much and it's distracting. Too little and a design feels flat. Ambient animations are the middle ground: subtle, slow-moving details that add atmosphere without stealing the show. In this article, web design pioneer Andy Clarke introduces the concept of ambient animations and explains how to implement them.
Unlike timeline-based animations, which tell stories across a sequence of events, or interaction animations that are triggered when someone touches something, ambient animations are the kind of passive movements you might not notice at first. But they make a design look alive in subtle ways. In an ambient animation, elements might subtly transition between colours, move slowly, or gradually shift position. Elements can appear and disappear, change size, or rotate slowly. Ambient animations aren't intrusive; they don't demand attention, aren't distracting, and don't interfere with what someone's trying to achieve when they use a product or website. They can be playful, too, making someone smile when they catch sight of them. That way, ambient animations add depth to a brand's personality.

To illustrate the concept of ambient animations, I've recreated the cover of a Quick Draw McGraw comic book (PDF) as a CSS/SVG animation. The comic was published by Charlton Comics in 1971, and, being printed, these characters didn't move, making them ideal candidates to transform into ambient animations.

FYI: Original cover artist Ray Dirgo was best known for his work drawing Hanna-Barbera characters for Charlton Comics during the 1970s. Ray passed away in 2000 at the age of 92. He outlived Charlton Comics, which went out of business in 1986, and DC Comics acquired its characters.

Tip: You can view the complete ambient animation code on CodePen.

Choosing Elements To Animate

Not everything on a page or in a graphic needs to move, and part of designing an ambient animation is knowing when to stop. The trick is to pick elements that lend themselves naturally to subtle movement, rather than forcing motion into places where it doesn't belong.

Natural Motion Cues

When I'm deciding what to animate, I look for natural motion cues and think about when something would move naturally in the real world. I ask myself: "Does this thing have weight?", "Is it flexible?", and "Would it move in real life?" If the answer's "yes," it'll probably feel right if it moves. There are several motion cues in Ray Dirgo's cover artwork. For example, the peace pipe Quick Draw's puffing on has two feathers hanging from it. They swing slightly left and right by three degrees as the pipe moves, just like real feathers would.

#quick-draw-pipe {
  animation: quick-draw-pipe-rotate 6s ease-in-out infinite alternate;
}

@keyframes quick-draw-pipe-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

#quick-draw-feather-1 {
  animation: quick-draw-feather-1-rotate 3s ease-in-out infinite alternate;
}

#quick-draw-feather-2 {
  animation: quick-draw-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes quick-draw-feather-1-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

@keyframes quick-draw-feather-2-rotate {
  0% { transform: rotate(-3deg); }
  100% { transform: rotate(3deg); }
}

Atmosphere, Not Action

I often choose elements or decorative details that add to the vibe but don't fight for attention. Ambient animations aren't about signalling to someone where they should look; they're about creating a mood. Here, the chief slowly and subtly rises and falls as he puffs on his pipe.
#chief {
  animation: chief-rise-fall 3s ease-in-out infinite alternate;
}

@keyframes chief-rise-fall {
  0% { transform: translateY(0); }
  100% { transform: translateY(-20px); }
}

For added effect, the feather on his head also moves in time with his rise and fall:

#chief-feather-1 {
  animation: chief-feather-1-rotate 3s ease-in-out infinite alternate;
}

#chief-feather-2 {
  animation: chief-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes chief-feather-1-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(-9deg); }
}

@keyframes chief-feather-2-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(9deg); }
}

Playfulness And Fun

One of the things I love most about ambient animations is how they bring fun into a design. They're an opportunity to demonstrate personality through playful details that make people smile when they notice them. Take a closer look at the chief, and you might spot his eyebrows raising and his eyes crossing as he puffs hard on his pipe. Quick Draw's eyebrows also bounce at what look like random intervals.

#quick-draw-eyebrow {
  animation: quick-draw-eyebrow-raise 5s ease-in-out infinite;
}

@keyframes quick-draw-eyebrow-raise {
  0%, 20%, 60%, 100% { transform: translateY(0); }
  10%, 50%, 80% { transform: translateY(-10px); }
}

Keep Hierarchy In Mind

Motion draws the eye, and even subtle movements have visual weight. So, I reserve the most obvious animations for the elements where I need to create the biggest impact. Smoking his pipe clearly has a big effect on Quick Draw McGraw, so to demonstrate this, I wrapped his elements, including his pipe and its feathers, within a new SVG group, and then I made that wobble.

#quick-draw-group {
  animation: quick-draw-group-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-group-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(2deg); }
  30% { transform: rotate(-2deg); }
  45% { transform: rotate(1deg); }
  60% { transform: rotate(-1deg); }
  75% { transform: rotate(0.5deg); }
  100% { transform: rotate(0deg); }
}

Then, to emphasise this motion, I mirrored those values to wobble his shadow:

#quick-draw-shadow {
  animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-shadow-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(-2deg); }
  30% { transform: rotate(2deg); }
  45% { transform: rotate(-1deg); }
  60% { transform: rotate(1deg); }
  75% { transform: rotate(-0.5deg); }
  100% { transform: rotate(0deg); }
}

Apply Restraint

Just because something can be animated doesn't mean it should be. When creating an ambient animation, I study the image and note the elements where subtle motion might add life. I keep in mind the questions: "What's the story I'm telling? Where does movement help, and when might it become distracting?" Remember, restraint isn't just about doing less; it's about doing the right things less often.

Layering SVGs For Export

In "Smashing Animations Part 4: Optimising SVGs," I wrote about the process I rely on to "prepare, optimise, and structure SVGs for animation." When elements are crammed into a single SVG file, they can be a nightmare to navigate. Locating a specific path or group can feel like searching for a needle in a haystack. That's why I develop my SVGs in layers, exporting and optimising one set of elements at a time, always in the order they'll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section.
I start by exporting background elements, optimising them, adding class and ID attributes, and pasting their code into my SVG file. Then, I export elements that often stay static or move as groups, like the chief and Quick Draw McGraw, before finally exporting, naming, and adding details like Quick Draw's pipe, eyes, and his stoned sparkles. Since I export each layer from the same-sized artboard, I don't need to worry about alignment or positioning issues, as they all slot into place automatically.

Implementing Ambient Animations

You don't need an animation framework or library to add ambient animations to a project. Most of the time, all you'll need is a well-prepared SVG and some thoughtful CSS. But let's start with the SVG. The key is to group elements logically and give them meaningful class or ID attributes, which act as animation hooks in the CSS. For this animation, I gave every moving part its own identifier, like #quick-draw-tail or #chief-smoke-2. That way, I could target exactly what I needed without digging through the DOM like a raccoon in a trash can.

Once the SVG is set up, CSS does most of the work. I can use @keyframes for more expressive movement, or animation-delay to simulate randomness and stagger timings (see the sketch at the end of this section). The trick is to keep everything subtle and remember I'm not animating for attention, I'm animating for atmosphere. Remember that most ambient animations loop continuously, so they should be lightweight and performance-friendly. And of course, it's good practice to respect users who've asked for less motion. You can wrap your animations in an @media prefers-reduced-motion query so they only run when they're welcome.

@media (prefers-reduced-motion: no-preference) {
  #quick-draw-shadow {
    animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
  }
}

It's a small touch that's easy to implement, and it makes your designs more inclusive.

Ambient Animation Design Principles

If you want your animations to feel ambient, more like atmosphere than action, it helps to follow a few principles. These aren't hard and fast rules, but rather things I've learned while animating smoke, sparkles, eyeballs, and eyebrows.

Keep Animations Slow And Smooth

Ambient animations should feel relaxed, so use longer durations and choose easing curves that feel organic. I often use ease-in-out, but cubic Bézier curves can also be helpful when you want a more relaxed feel and the kind of movements you might find in nature.

Loop Seamlessly And Avoid Abrupt Changes

Hard resets or sudden jumps can ruin the mood, so if an animation loops, ensure it cycles smoothly. You can do this by matching start and end keyframes, or by setting the animation-direction value to alternate so the animation plays forward, then back.

Use Layering To Build Complexity

A single animation might be boring. Five subtle animations, each on separate layers, can feel rich and alive. Think of it like building a sound mix: you want variation in rhythm, tone, and timing. In my animation, sparkles twinkle at varying intervals, smoke curls upward, feathers sway, and eyes boggle. Nothing dominates, and each motion plays its small part in the scene.

Avoid Distractions

The point of an ambient animation is that it doesn't dominate. It's a background element and not a call to action. If someone's eyes are drawn to a raised eyebrow, it's probably too much, so dial back the animation until it feels like something you'd only catch if you're really looking.
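To make the stagger and easing ideas above concrete, here is a minimal sketch of how a few twinkling sparkles could be offset with animation-delay and softened with a custom cubic-bezier() curve. The #sparkle-* IDs and the timing values are illustrative assumptions, not identifiers from the Quick Draw McGraw demo; adapt them to whatever hooks your own SVG uses.

/* Hypothetical sparkle elements sharing one keyframe animation. */
#sparkle-1,
#sparkle-2,
#sparkle-3 {
  /* A gentle, organic-feeling ease; tweak the control points to taste. */
  animation: sparkle-twinkle 4s cubic-bezier(0.45, 0.05, 0.55, 0.95) infinite alternate;
}

/* Offsetting each copy breaks the visual lockstep and reads as randomness. */
#sparkle-2 { animation-delay: 1.3s; }
#sparkle-3 { animation-delay: 2.1s; }

@keyframes sparkle-twinkle {
  0%   { opacity: 0.2; transform: scale(0.9); }
  100% { opacity: 1;   transform: scale(1); }
}

As with the shadow example above, the whole block can sit inside an @media (prefers-reduced-motion: no-preference) query so it only runs for people who are happy to see motion.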
Consider Accessibility And Performance Check prefers-reduced-motion, and don't assume everyone's device can handle complex animations. SVG and CSS are light, but blur filters, drop shadows, and complex CSS animations can still tax lower-powered devices. When an animation is purely decorative, consider adding aria-hidden="true" to keep it from cluttering up the accessibility tree. Quick On The Draw Ambient animation is like seasoning on a great dish. It's the pinch of salt you barely notice but would miss when it's gone. It doesn't shout, it whispers. It doesn't lead, it lingers. It's floating smoke, swaying feathers, and sparkles you catch in the corner of your eye. And when it's done well, ambient animation adds personality to a design without asking for applause. Now, I realise that not everyone needs to animate cartoon characters. So, in part two, I'll share how I created animations for several recent client projects. Until next time, if you're crafting an illustration or working with SVG, ask yourself: what would move if this were real? Then animate just that. Make it slow and soft. Keep it ambient. You can view the complete ambient animation code on CodePen.
The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence
As digital products incorporate generative and agentic AI at an ever-increasing rate, trust has become the invisible user interface. When it works, interactions feel seamless. When it fails, the entire experience collapses. But trust isn't mystical. It can be understood, measured, and designed for. Here are practical methods and strategies for designing more trustworthy and ethical AI systems.
Misuse and misplaced trust of AI is becoming an unfortunate common event. For example, lawyers trying to leverage the power of generative AI for research submit court filings citing multiple compelling legal precedents. The problem? The AI had confidently, eloquently, and completely fabricated the cases cited. The resulting sanctions and public embarrassment can become a viral cautionary tale, shared across social media as a stark example of AIâs fallibility. This goes beyond a technical glitch; itâs a catastrophic failure of trust in AI tools in an industry where accuracy and trust are critical. The trust issue here is twofold â the law firms are submitting briefs in which they have blindly over-trusted the AI tool to return accurate information. The subsequent fallout can lead to a strong distrust in AI tools, to the point where platforms featuring AI might not be considered for use until trust is reestablished. Issues with trusting AI arenât limited to the legal field. We are seeing the impact of fictional AI-generated information in critical fields such as healthcare and education. On a more personal scale, many of us have had the experience of asking Siri or Alexa to perform a task, only to have it done incorrectly or not at all, for no apparent reason. Iâm guilty of sending more than one out-of-context hands-free text to an unsuspecting contact after Siri mistakenly pulls up a completely different name than the one Iâd requested. With digital products moving to incorporate generative and agentic AI at an increasingly frequent rate, trust has become the invisible user interface. When it works, our interactions are seamless and powerful. When it breaks, the entire experience collapses, with potentially devastating consequences. As UX professionals, weâre on the front lines of a new twist on a common challenge. How do we build products that users can rely on? And how do we even begin to measure something as ephemeral as trust in AI? Trust isnât a mystical quality. It is a psychological construct built on predictable factors. I wonât dive deep into academic literature on trust in this article. However, it is important to understand that trust is a concept that can be understood, measured, and designed for. This article will provide a practical guide for UX researchers and designers. We will briefly explore the psychological anatomy of trust, offer concrete methods for measuring it, and provide actionable strategies for designing more trustworthy and ethical AI systems. The Anatomy of Trust: A Psychological Framework for AI To build trust, we must first understand its components. Think of trust like a four-legged stool. If any one leg is weak, the whole thing becomes unstable. Based on classic psychological models, we can adapt these âlegsâ for the AI context. 1. Ability (or Competence) This is the most straightforward pillar: Does the AI have the skills to perform its function accurately and effectively? If a weather app is consistently wrong, you stop trusting it. If an AI legal assistant creates fictitious cases, it has failed the basic test of ability. This is the functional, foundational layer of trust. 2. Benevolence This moves from function to intent. Does the user believe the AI is acting in their best interest? A GPS that suggests a toll-free route even if itâs a few minutes longer might be perceived as benevolent. Conversely, an AI that aggressively pushes sponsored products feels self-serving, eroding this sense of benevolence. 
This is where user fears, such as concerns about job displacement, directly challenge trustâthe user starts to believe the AI is not on their side. 3. Integrity Does AI operate on predictable and ethical principles? This is about transparency, fairness, and honesty. An AI that clearly states how it uses personal data demonstrates integrity. A system that quietly changes its terms of service or uses dark patterns to get users to agree to something violates integrity. An AI job recruiting tool that has subtle yet extremely harmful social biases, existing in the algorithm, violates integrity. 4. Predictability & Reliability Can the user form a stable and accurate mental model of how the AI will behave? Unpredictability, even if the outcomes are occasionally good, creates anxiety. A user needs to know, roughly, what to expect. An AI that gives a radically different answer to the same question asked twice is unpredictable and, therefore, hard to trust. The Trust Spectrum: The Goal of a Well-Calibrated Relationship Our goal as UX professionals shouldnât be to maximize trust at all costs. An employee who blindly trusts every email they receive is a security risk. Likewise, a user who blindly trusts every AI output can be led into dangerous situations, such as the legal briefs referenced at the beginning of this article. The goal is well-calibrated trust. Think of it as a spectrum where the upper-mid level is the ideal state for a truly trustworthy product to achieve: Active Distrust The user believes the AI is incompetent or malicious. They will avoid it or actively work against it. Suspicion & Scrutiny The user interacts cautiously, constantly verifying the AIâs outputs. This is a common and often healthy state for users of new AI. Calibrated Trust (The Ideal State) This is the sweet spot. The user has an accurate understanding of the AIâs capabilitiesâits strengths and, crucially, its weaknesses. They know when to rely on it and when to be skeptical. Over-trust & Automation Bias The user unquestioningly accepts the AIâs outputs. This is where users follow flawed AI navigation into a field or accept a fictional legal brief as fact. Our job is to design experiences that guide users away from the dangerous poles of Active Distrust and Over-trust and toward that healthy, realistic middle ground of Calibrated Trust. The Researcherâs Toolkit: How to Measure Trust In AI Trust feels abstract, but it leaves measurable fingerprints. Academics in the social sciences have done much to define both what trust looks like and how it might be measured. As researchers, we can capture these signals through a mix of qualitative, quantitative, and behavioral methods. Qualitative Probes: Listening For The Language Of Trust During interviews and usability tests, go beyond âWas that easy to use?â and listen for the underlying psychology. Here are some questions you can start using tomorrow: To measure Ability: âTell me about a time this toolâs performance surprised you, either positively or negatively.â To measure Benevolence: âDo you feel this system is on your side? What gives you that impression?â To measure Integrity: âIf this AI made a mistake, how would you expect it to handle it? What would be a fair response?â To measure Predictability: âBefore you clicked that button, what did you expect the AI to do? How closely did it match your expectation?â Investigating Existential Fears (The Job Displacement Scenario) One of the most potent challenges to an AIâs Benevolence is the fear of job displacement. 
When a participant expresses this, it is a critical research finding. It requires a specific, ethical probing technique. Imagine a participant says, âWow, it does that part of my job pretty well. I guess I should be worried.â An untrained researcher might get defensive or dismiss the comment. An ethical, trained researcher validates and explores: âThank you for sharing that; itâs a vital perspective, and itâs exactly the kind of feedback we need to hear. Can you tell me more about what aspects of this tool make you feel that way? In an ideal world, how would a tool like this work with you to make your job better, not to replace it?â This approach respects the participant, validates their concern, and reframes the feedback into an actionable insight about designing a collaborative, augmenting tool rather than a replacement. Similarly, your findings should reflect the concern users expressed about replacement. We shouldnât pretend this fear doesnât exist, nor should we pretend that every AI feature is being implemented with pure intention. Users know better than that, and we should be prepared to argue on their behalf for how the technology might best co-exist within their roles. Quantitative Measures: Putting A Number On Confidence You can quantify trust without needing a data science degree. After a user completes a task with an AI, supplement your standard usability questions with a few simple Likert-scale items: âThe AIâs suggestion was reliable.â (1-7, Strongly Disagree to Strongly Agree) âI am confident in the AIâs output.â (1-7) âI understood why the AI made that recommendation.â (1-7) âThe AI responded in a way that I expected.â (1-7) âThe AI provided consistent responses over time.â (1-7) Over time, these metrics can track how trust is changing as your product evolves. Note: If you want to go beyond these simple questions that Iâve made up, there are numerous scales (measurements) of trust in technology that exist in academic literature. It might be an interesting endeavor to measure some relevant psychographic and demographic characteristics of your users and see how that correlates with trust in AI/your product. Table 1 at the end of the article contains four examples of current scales you might consider using to measure trust. You can decide which is best for your application, or you might pull some of the items from any of the scales if you arenât looking to publish your findings in an academic journal, yet want to use items that have been subjected to some level of empirical scrutiny. Behavioral Metrics: Observing What Users Do, Not Just What They Say Peopleâs true feelings are often revealed in their actions. You can use behaviors that reflect the specific context of use for your product. Here are a few general metrics that might apply to most AI tools that give insight into usersâ behavior and the trust they place in your tool. Correction Rate How often do users manually edit, undo, or ignore the AIâs output? A high correction rate is a powerful signal of low trust in its Ability. Verification Behavior Do users switch to Google or open another application to double-check the AIâs work? This indicates they donât trust it as a standalone source of truth. It can also potentially be positive that they are calibrating their trust in the system when they use it up front. Disengagement Do users turn the AI feature off? Do they stop using it entirely after one bad experience? This is the ultimate behavioral vote of no confidence. 
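To make these measures concrete, here is a minimal sketch, in TypeScript, of how the Likert items and behavioural signals described above might be rolled up into trackable numbers. The type names, field mappings, and aggregation choices are illustrative assumptions rather than a validated instrument; adapt them to your own product, logging, and research plan.

```typescript
// Sketch: aggregating the survey items and behavioural signals described above.
// All names and fields are illustrative assumptions, not a standard instrument.

type LikertResponse = {
  reliability: number;     // "The AI's suggestion was reliable." (1-7)
  confidence: number;      // "I am confident in the AI's output." (1-7)
  explainability: number;  // "I understood why the AI made that recommendation." (1-7)
  expectation: number;     // "The AI responded in a way that I expected." (1-7)
  consistency: number;     // "The AI provided consistent responses over time." (1-7)
};

type SessionLog = {
  aiSuggestions: number;   // suggestions shown to the user in a session
  userCorrections: number; // times the user edited, undid, or ignored a suggestion
  verifications: number;   // times the user double-checked the answer elsewhere
  disengaged: boolean;     // the user switched the AI feature off
};

const mean = (xs: number[]) =>
  xs.length === 0 ? NaN : xs.reduce((a, b) => a + b, 0) / xs.length;

// Average each Likert item across participants so it can be tracked release over release.
function surveyTrustScores(responses: LikertResponse[]) {
  const keys = [
    "reliability", "confidence", "explainability", "expectation", "consistency",
  ] as const;
  return Object.fromEntries(keys.map((k) => [k, mean(responses.map((r) => r[k]))]));
}

// Behavioural fingerprints of trust: correction rate, verification rate, disengagement.
function behaviouralTrustSignals(logs: SessionLog[]) {
  const suggestions = logs.reduce((n, l) => n + l.aiSuggestions, 0);
  const corrections = logs.reduce((n, l) => n + l.userCorrections, 0);
  const verifications = logs.reduce((n, l) => n + l.verifications, 0);
  return {
    correctionRate: corrections / Math.max(suggestions, 1),
    verificationRate: verifications / Math.max(suggestions, 1),
    disengagementRate: logs.length === 0 ? 0 : logs.filter((l) => l.disengaged).length / logs.length,
  };
}
```

Tracked together over time, a falling correction rate alongside rising survey scores is a reasonable sign that trust is becoming better calibrated, while a rising disengagement rate is the behavioural vote of no confidence mentioned above.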
Designing For Trust: From Principles to Pixels Once youâve researched and measured trust, you can begin to design for it. This means translating psychological principles into tangible interface elements and user flows. Designing for Competence and Predictability Set Clear Expectations Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle. A simple âIâm still learning about [topic X], so please double-check my answersâ can work wonders. Show Confidence Levels Instead of just giving an answer, have the AI signal its own uncertainty. A weather app that says â70% chance of rainâ is more trustworthy than one that just says âIt will rainâ and is wrong. An AI could say, âIâm 85% confident in this summary,â or highlight sentences itâs less sure about. The Role of Explainability (XAI) and Transparency Explainability isnât about showing users the code. Itâs about providing a useful, human-understandable rationale for a decision. Instead of: âHere is your recommendation.â Try: âBecause you frequently read articles about UX research methods, Iâm recommending this new piece on measuring trust in AI.â This addition transforms AI from an opaque oracle to a transparent logical partner. Many of the popular AI tools (e.g., ChatGPT and Gemini) show the thinking that went into the response they provide to a user. Figure 3 shows the steps Gemini went through to provide me with a non-response when I asked it to help me generate the masterpiece displayed above in Figure 2. While this might be more information than most users care to see, it provides a useful resource for a user to audit how the response came to be, and it has provided me with instructions on how I might proceed to address my task. Figure 4 shows an example of a scorecard OpenAI makes available as an attempt to increase usersâ trust. These scorecards are available for each ChatGPT model and go into the specifics of how the models perform as it relates to key areas such as hallucinations, health-based conversations, and much more. In reading the scorecards closely, you will see that no AI model is perfect in any area. The user must remain in a trust but verify mode to make the relationship between human reality and AI work in a way that avoids potential catastrophe. There should never be blind trust in an LLM. Designing For Trust Repair (Graceful Error Handling) And Not Knowing an Answer Your AI will make mistakes. Trust is not determined by the absence of errors, but by how those errors are handled. Acknowledge Errors Humbly. When the AI is wrong, it should be able to state that clearly. âMy apologies, I misunderstood that request. Could you please rephrase it?â is far better than silence or a nonsensical answer. Provide an Easy Path to Correction. Make feedback mechanisms (like thumbs up/down or a correction box) obvious. More importantly, show that the feedback is being used. A âThank you, Iâm learning from your correctionâ can help rebuild trust after a failure. As long as this is true. Likewise, your AI canât know everything. You should acknowledge this to your users. UX practitioners should work with the product team to ensure that honesty about limitations is a core product principle. This can include the following: Establish User-Centric Metrics: Instead of only measuring engagement or task completion, UXers can work with product managers to define and track metrics like: Hallucination Rate: The frequency with which the AI provides verifiably false information. 
Successful Fallback Rate: How often the AI correctly identifies its inability to answer and provides a helpful, honest alternative. Prioritize the âI Donât Knowâ Experience: UXers should frame the âI donât knowâ response not as an error state, but as a critical feature. They must lobby for the engineering and content resources needed to design a high-quality, helpful fallback experience. UX Writing And Trust All of these considerations highlight the critical role of UX writing in the development of trustworthy AI. UX writers are the architects of the AIâs voice and tone, ensuring that its communication is clear, honest, and empathetic. They translate complex technical processes into user-friendly explanations, craft helpful error messages, and design conversational flows that build confidence and rapport. Without thoughtful UX writing, even the most technologically advanced AI can feel opaque and untrustworthy. The words and phrases an AI uses are its primary interface with users. UX writers are uniquely positioned to shape this interaction, ensuring that every tooltip, prompt, and response contributes to a positive and trust-building experience. Their expertise in human-centered language and design is indispensable for creating AI systems that not only perform well but also earn and maintain the trust of their users. A few key areas for UX writers to focus on when writing for AI include: Prioritize Transparency Clearly communicate the AIâs capabilities and limitations, especially when itâs still learning or if its responses are generated rather than factual. Use phrases that indicate the AIâs nature, such as âAs an AI, I can...â or âThis is a generated response.â Design for Explainability When the AI provides a recommendation, decision, or complex output, strive to explain the reasoning behind it in an understandable way. This builds trust by showing the user how the AI arrived at its conclusion. Emphasize User Control Empower users by providing clear ways to provide feedback, correct errors, or opt out of certain AI features. This reinforces the idea that the user is in control and the AI is a tool to assist them. The Ethical Tightrope: The Researcherâs Responsibility As the people responsible for understanding and advocating for users, we walk an ethical tightrope. Our work comes with profound responsibilities. The Danger Of âTrustwashingâ We must draw a hard line between designing for calibrated trust and designing to manipulate users into trusting a flawed, biased, or harmful system. For example, if an AI system designed for loan approvals consistently discriminates against certain demographics but presents a user interface that implies fairness and transparency, this would be an instance of trustwashing. Another example of trustwashing would be if an AI medical diagnostic tool occasionally misdiagnoses conditions, but the user interface makes it seem infallible. To avoid trustwashing, the system should clearly communicate the potential for error and the need for human oversight. Our goal must be to create genuinely trustworthy systems, not just the perception of trust. Using these principles to lull users into a false sense of security is a betrayal of our professional ethics. To avoid and prevent trustwashing, researchers and UX teams should: Prioritize genuine transparency. Clearly communicate the limitations, biases, and uncertainties of AI systems. Donât overstate capabilities or obscure potential risks. Conduct rigorous, independent evaluations. 
Go beyond internal testing and seek external validation of system performance, fairness, and robustness. Engage with diverse stakeholders. Involve users, ethics experts, and impacted communities in the design, development, and evaluation processes to identify potential harms and build genuine trust. Be accountable for outcomes. Take responsibility for the societal impact of AI systems, even if unintended. Establish mechanisms for redress and continuous improvement. Be accountable for outcomes. Establish clear and accessible mechanisms for redress when harm occurs, ensuring that individuals and communities affected by AI decisions have avenues for recourse and compensation. Educate the public. Help users understand how AI works, its limitations, and what to look for when evaluating AI products. Advocate for ethical guidelines and regulations. Support the development and implementation of industry standards and policies that promote responsible AI development and prevent deceptive practices. Be wary of marketing hype. Critically assess claims made about AI systems, especially those that emphasize âtrustworthinessâ without clear evidence or detailed explanations. Publish negative findings. Donât shy away from reporting challenges, failures, or ethical dilemmas encountered during research. Transparency about limitations is crucial for building long-term trust. Focus on user empowerment. Design systems that give users control, agency, and understanding rather than just passively accepting AI outputs. The Duty To Advocate When our research uncovers deep-seated distrust or potential harm â like the fear of job displacement â our job has only just begun. We have an ethical duty to advocate for that user. In my experience directing research teams, Iâve seen that the hardest part of our job is often carrying these uncomfortable truths into rooms where decisions are made. We must champion these findings and advocate for design and strategy shifts that prioritize user well-being, even when it challenges the product roadmap. I personally try to approach presenting this information as an opportunity for growth and improvement, rather than a negative challenge. For example, instead of stating âUsers donât trust our AI because they fear job displacement,â I might frame it as âAddressing user concerns about job displacement presents a significant opportunity to build deeper trust and long-term loyalty by demonstrating our commitment to responsible AI development and exploring features that enhance human capabilities rather than replace them.â This reframing can shift the conversation from a defensive posture to a proactive, problem-solving mindset, encouraging collaboration and innovative solutions that ultimately benefit both the user and the business. Itâs no secret that one of the more appealing areas for businesses to use AI is in workforce reduction. In reality, there will be many cases where businesses look to cut 10â20% of a particular job family due to the perceived efficiency gains of AI. However, giving users the opportunity to shape the product may steer it in a direction that makes them feel safer than if they do not provide feedback. We should not attempt to convince users they are wrong if they are distrustful of AI. We should appreciate that they are willing to provide feedback, creating an experience that is informed by the human experts who have long been doing the task being automated. 
Conclusion: Building Our Digital Future On A Foundation Of Trust The rise of AI is not the first major technological shift our field has faced. However, it presents one of the most significant psychological challenges of our current time. Building products that are not just usable but also responsible, humane, and trustworthy is our obligation as UX professionals. Trust is not a soft metric. It is the fundamental currency of any successful human-technology relationship. By understanding its psychological roots, measuring it with rigor, and designing for it with intent and integrity, we can move from creating "intelligent" products to building a future where users can place their confidence in the tools they use every day. A trust that is earned and deserved.
Table 1: Published Academic Scales Measuring Trust In Automated Systems
Trust in Automation Scale
- Focus: A 12-item questionnaire to assess trust between people and automated systems.
- Key dimensions of trust: Measures a general level of trust, including reliability, predictability, and confidence.
- Citation: Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71.
Trust of Automated Systems Test (TOAST)
- Focus: A 9-item measure of user trust in a variety of automated systems, designed for quick administration.
- Key dimensions of trust: Divided into two main subscales: Understanding (the user's comprehension of the system) and Performance (belief in the system's effectiveness).
- Citation: Wojton, H. M., Porter, D., Lane, S. T., Bieber, C., & Madhavan, P. (2020). Initial validation of the trust of automated systems test (TOAST). The Journal of Social Psychology, 160(6), 735–750.
Trust in Automation Questionnaire
- Focus: A 19-item questionnaire capable of predicting user reliance on automated systems. A 2-item subscale is available for quick assessments; the full tool is recommended for a more thorough analysis.
- Key dimensions of trust: Measures six factors: reliability, understandability, propensity to trust, intentions of developers, familiarity, and trust in automation.
- Citation: Körber, M. (2018). Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings of the 20th Triennial Congress of the IEA. Springer.
Human Computer Trust Scale
- Focus: A 12-item questionnaire created to provide an empirically sound tool for assessing user trust in technology.
- Key dimensions of trust: Divided into two key factors: Benevolence and Competence (the positive attributes of the technology) and Perceived Risk (the user's subjective assessment of the potential for negative consequences when using a technical artifact).
- Citation: Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology.
Appendix A: Trust-Building Tactics Checklist
To design for calibrated trust, consider implementing the following tactics, organized by the four pillars of trust:
1. Ability (Competence) & Predictability
✓ Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate the AI's strengths and weaknesses.
✓ Show Confidence Levels: Display the AI's uncertainty (e.g., "70% chance," "85% confident") or highlight less certain parts of its output.
✓ Provide Explainability (XAI): Offer useful, human-understandable rationales for the AI's decisions or recommendations (e.g., "Because you frequently read X, I'm recommending Y").
✓ Design for Graceful Error Handling:
  ✓ Acknowledge errors humbly (e.g., "My apologies, I misunderstood that request.").
  ✓ Provide easy paths to correction (e.g., prominent feedback mechanisms like thumbs up/down).
  ✓ Show that feedback is being used (e.g., "Thank you, I'm learning from your correction").
✓ Design for "I Don't Know" Responses:
  ✓ Acknowledge limitations honestly.
  ✓ Prioritize a high-quality, helpful fallback experience when the AI cannot answer.
✓ Prioritize Transparency: Clearly communicate the AI's capabilities and limitations, especially if responses are generated.
2. Benevolence
✓ Address Existential Fears: When users express concerns (e.g., job displacement), validate their concerns and reframe the feedback into actionable insights about collaborative tools.
✓ Prioritize User Well-being: Advocate for design and strategy shifts that prioritize user well-being, even if it challenges the product roadmap.
✓ Emphasize User Control: Provide clear ways for users to give feedback, correct errors, or opt out of AI features.
3. Integrity
✓ Adhere to Ethical Principles: Ensure the AI operates on predictable, ethical principles, demonstrating fairness and honesty.
✓ Prioritize Genuine Transparency: Clearly communicate the limitations, biases, and uncertainties of AI systems; avoid overstating capabilities or obscuring risks.
✓ Conduct Rigorous, Independent Evaluations: Seek external validation of system performance, fairness, and robustness to mitigate bias.
✓ Engage Diverse Stakeholders: Involve users, ethics experts, and impacted communities in the design and evaluation processes.
✓ Be Accountable for Outcomes: Establish clear mechanisms for redress and continuous improvement for societal impacts, even if unintended.
✓ Educate the Public: Help users understand how AI works, its limitations, and how to evaluate AI products.
✓ Advocate for Ethical Guidelines: Support the development and implementation of industry standards and policies that promote responsible AI.
✓ Be Wary of Marketing Hype: Critically assess claims about AI "trustworthiness" and demand verifiable data.
✓ Publish Negative Findings: Be transparent about challenges, failures, or ethical dilemmas encountered during research.
4. Predictability & Reliability
✓ Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle.
✓ Show Confidence Levels: Instead of just giving an answer, have the AI signal its own uncertainty.
✓ Provide Explainability (XAI) and Transparency: Offer a useful, human-understandable rationale for AI decisions.
✓ Design for Graceful Error Handling: Acknowledge errors humbly and provide easy paths to correction.
✓ Prioritize the "I Don't Know" Experience: Frame "I don't know" as a feature and design a high-quality fallback experience.
✓ Prioritize Transparency (UX Writing): Clearly communicate the AI's capabilities and limitations, especially when it's still learning or if responses are generated.
✓ Design for Explainability (UX Writing): Explain the reasoning behind AI recommendations, decisions, or complex outputs.
How To Minimize The Environmental Impact Of Your Website
As responsible digital professionals, we are becoming increasingly aware of the environmental impact of our work and need to find effective and pragmatic ways to reduce it. James Chudley shares a new decarbonising approach that will help you to minimise the environmental impact of your website, benefiting people, profit, purpose, performance, and the planet.
Climate change is the single biggest health threat to humanity, accelerated by human activities such as the burning of fossil fuels, which generate greenhouse gases that trap the sunâs heat. The average temperature of the earthâs surface is now 1.2°C warmer than it was in the late 1800âs, and projected to more than double by the end of the century. The consequences of climate change include intense droughts, water shortages, severe fires, melting polar ice, catastrophic storms, and declining biodiversity. The Internet Is A Significant Part Of The Problem Shockingly, the internet is responsible for higher global greenhouse emissions than the aviation industry, and is projected to be responsible for 14% of all global greenhouse gas emissions by 2040. If the internet were a country, it would be the 4th largest polluter in the world and represents the largest coal-powered machine on the planet. But how can something digital like the internet produce harmful emissions? Internet emissions come from powering the infrastructure that drives the internet, such as the vast data centres and data transmission networks that consume huge amounts of electricity. Internet emissions also come from the global manufacturing, distribution, and usage of the estimated 30.5 billion devices (phones, laptops, etc.) that we use to access the internet. Unsurprisingly, internet related emissions are increasing, given that 60% of the worldâs population spend, on average, 40% of their waking hours online. We Must Urgently Reduce The Environmental Impact Of The Internet As responsible digital professionals, we must act quickly to minimise the environmental impact of our work. It is encouraging to see the UK government encourage action by adding âMinimise environmental impactâ to their best practice design principles, but there is still too much talking and not enough corrective action taking place within our industry. The reality of many tightly constrained, fast-paced, and commercially driven web projects is that minimising environmental impact is far from the agenda. So how can we make the environment more of a priority and talk about it in ways that stakeholders will listen to? A eureka moment on a recent web optimisation project gave me an idea. My Eureka Moment I led a project to optimise the mobile performance of www.talktofrank.com, a government drug advice website that aims to keep everyone safe from harm. Mobile performance is critically important for the success of this service to ensure that users with older mobile devices and those using slower network connections can still access the information they need. Our work to minimise page weights focused on purely technical changes that our developer made following recommendations from tools such as Google Lighthouse that reduced the size of the webpages of a key user journey by up to 80%. This resulted in pages downloading up to 30% faster and the carbon footprint of the journey being reduced by 80%. We hadnât set out to reduce the carbon footprint, but seeing these results led to my eureka moment. I realised that by minimising page weights, you improve performance (which is a win for users and service owners) and also consume less energy (due to needing to transfer and store less data), creating additional benefits for the planet â so everyone wins. This felt like a breakthrough because business, user, and environmental requirements are often at odds with one another. 
By focussing on minimising websites to be as simple, lightweight and easy to use as possible you get benefits that extend beyond the triple bottom line of people, planet and profit to include performance and purpose. So why is âminimisingâ such a great digital sustainability strategy? Profit Website providers win because their website becomes more efficient and more likely to meet its intended outcomes, and a lighter site should also lead to lower hosting bills. People People win because they get to use a website that downloads faster, is quick and easy to use because it's been intentionally designed to be as simple as possible, enabling them to complete their tasks with the minimum amount of effort and mental energy. Performance Lightweight webpages download faster so perform better for users, particularly those on older devices and on slower network connections. Planet The planet wins because the amount of energy (and associated emissions) that is required to deliver the website is reduced. Purpose We know that we do our best work when we feel a sense of purpose. It is hugely gratifying as a digital professional to know that our work is doing good in the world and contributing to making things better for people and the environment. In order to prioritise the environment, we need to be able to speak confidently in a language that will resonate with the business and ensure that any investment in time and resources yields the widest range of benefits possible. So even if you feel that the environment is a very low priority on your projects, focusing on minimising page weights to improve performance (which is generally high on the agenda) presents the perfect trojan horse for an environmental agenda (should you need one). Doing the right thing isnât always easy, but weâve done it before when managing to prioritise issues such as usability, accessibility, and inclusion on digital projects. Many of the things that make websites easier to use, more accessible, and more effective also help to minimise their environmental impact, so the things you need to do will feel familiar and achievable, so donât worry about it all being another new thing to learn about! So this all makes sense in theory, but whatâs the master plan to use when putting it into practice? The Masterplan The masterplan for creating websites that have minimal environmental impact is to focus on offering the maximum value from the minimum input of energy. Itâs an adaptation of Buckminister Fullerâs âDymaxionâ principle, which is one of his many progressive and groundbreaking sustainability strategies for living and surviving on a planet with finite resources. Inputs of energy include both the electrical energy that is required to operate websites and also the mental energy that is required to use them. You can achieve this by minimising websites to their core content, features, and functionality, ensuring that everything can be justified from the perspective of meeting a business or user need. This means that anything that isnât adding a proportional amount of value to the amount of energy it requires to provide it should be removed. So thatâs the masterplan, but how do you put it into practice? Decarbonise Your Highest Value User Journeys Iâve developed a new approach called âDecarbonising User Journeysâ that will help you to minimise the environmental impact of your website and maximise its performance. 
Note: The approach deliberately focuses on optimising key user journeys and not entire websites to keep things manageable and to make it easier to get started. The secret here is to start small, demonstrate improvements, and then scale. The approach consists of five simple steps: Identify your highest value user journey, Benchmark your user journey, Set targets, Decarbonise your user journey, Track and share your progress. Hereâs how it works. Step 1: Identify Your Highest Value User Journey Your highest value user journey might be the one that your users value the most, the one that brings you the highest revenue, or the one that is fundamental to the success of your organisation. You could also focus on a user journey that you know is performing particularly badly and has the potential to deliver significant business and user benefits if improved. You may have lots of important user journeys, and itâs fine to decarbonise multiple journeys in parallel if you have the resources, but Iâd recommend starting with one first to keep things simple. To bring this to life, letâs consider a hypothetical example of a premiership football club trying to decarbonise its online ticket-buying journey that receives high levels of traffic and is responsible for a significant proportion of its weekly income. Step 2: Benchmark Your User Journey Once youâve selected your user journey, you need to benchmark it in terms of how well it meets user needs, the value it offers your organisation, and its carbon footprint. It is vital that you understand the job it needs to do and how well it is doing it before you start to decarbonise it. There is no point in removing elements of the journey in an effort to reduce its carbon footprint, for example, if you compromise its ability to meet a key user or business need. You can benchmark how well your user journey is meeting user needs by conducting user research alongside analysing existing customer feedback. Interviews with business stakeholders will help you to understand the value that your journey is providing the organisation and how well business needs are being met. You can benchmark the carbon footprint and performance of your user journey using online tools such as Cardamon, Ecograder, Website Carbon Calculator, Google Lighthouse, and Bioscore. Make sure you have your analytics data to hand to help get the most accurate estimate of your footprint. To use these tools, simply add the URL of each page of your journey, and they will give you a range of information such as page weight, energy rating, and carbon emissions. Google Lighthouse works slightly differently via a browser plugin and generates a really useful and detailed performance report as opposed to giving you a carbon rating. A great way to bring your benchmarking scores to life is to visualise them in a similar way to how you would present a customer journey map or service blueprint. This example focuses on just communicating the carbon footprint of the user journey, but you can also add more swimlanes to communicate how well the journey is performing from a user and business perspective, too, adding user pain points, quotes, and business metrics where appropriate. Iâve found that adding the energy efficiency ratings is really effective because itâs an approach that people recognise from their household appliances. This adds a useful context to just showing the weights (such as grams or kilograms) of CO2, which are generally meaningless to people. 
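If you want to produce this kind of per-page context yourself, alongside the online tools listed above, a rough conversion from transfer size to CO2 can be scripted. The sketch below is illustrative only: the energy-per-gigabyte and grid-intensity coefficients are assumptions loosely based on one commonly cited model (the Sustainable Web Design model), and the letter grades are an invented approximation of the appliance-style ratings mentioned above, so swap in whichever model and thresholds your team standardises on.

```typescript
// Sketch: a rough per-page-view CO2 estimate from transfer size.
// The coefficients and rating bands below are assumptions for illustration only.

const KWH_PER_GB = 0.81;        // assumed end-to-end energy per GB transferred
const GRID_G_CO2_PER_KWH = 442; // assumed average grid intensity, gCO2e per kWh

type PageResult = {
  url: string;
  transferBytes: number; // from Lighthouse, analytics, or a crawler
};

function estimateCo2Grams(transferBytes: number): number {
  const gigabytes = transferBytes / 1024 ** 3;
  return gigabytes * KWH_PER_GB * GRID_G_CO2_PER_KWH;
}

// A crude appliance-style grade so stakeholders get context beyond raw grams.
function rating(gramsPerView: number): string {
  if (gramsPerView < 0.2) return "A";
  if (gramsPerView < 0.5) return "B";
  if (gramsPerView < 1.0) return "C";
  return "D";
}

function benchmarkJourney(pages: PageResult[]) {
  return pages.map((p) => {
    const grams = estimateCo2Grams(p.transferBytes);
    return { url: p.url, gramsPerView: Number(grams.toFixed(2)), rating: rating(grams) };
  });
}

// Example: a hypothetical ticket-buying journey, page weights in megabytes.
console.table(
  benchmarkJourney([
    { url: "/home", transferBytes: 2.1 * 1024 ** 2 },
    { url: "/tickets", transferBytes: 5.4 * 1024 ** 2 },
    { url: "/checkout", transferBytes: 1.2 * 1024 ** 2 },
  ]),
);
```

Running something like this over each page of the journey gives you the same kind of per-page breakdown the benchmarking tools provide, in a form you can rerun automatically as the pages change.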
Within my benchmarking reports, I also add a set of benchmarking data for every page within the user journey. This gives your stakeholders a more detailed breakdown and a simple summary alongside a snapshot of the benchmarked page. Your benchmarking activities will give you a really clear picture of where remedial work is required from an environmental, user, and business point of view. In our football user journey example, itâs clear that the âNewsâ and âTicketsâ pages need some attention to reduce their carbon footprint, so they would be a sensible priority for decarbonising. Step 3: Set Targets Use your benchmarking results to help you set targets to aim for, such as a carbon budget, energy efficiency, maximum page weight, and minimum Google Lighthouse performance targets for each individual page, in addition to your existing UX metrics and business KPIs. There is no right or wrong way to set targets. Choose what you think feels achievable and viable for your business, and youâll only learn how reasonable and achievable they are when you begin to decarbonise your user journeys. Setting targets is important because it gives you something to aim for and keeps you focused and accountable. The quantitative nature of this work is great because it gives you the ability to quickly demonstrate the positive impact of your work, making it easier to justify the time and resources you are dedicating to it. Step 4: Decarbonise Your User Journey Your objective now is to decarbonise your user journey by minimising page weights, improving your Lighthouse performance rating, and minimising pages so that they meet both user and business needs in the most efficient, simple, and effective way possible. Itâs up to you how you approach this depending on the resources and skills that you have, you can focus on specific pages or addressing a specific problem area such as heavyweight images or videos across the entire user journey. Hereâs a list of activities that will all help to reduce the carbon footprint of your user journey: Work through the recommendations in the âdiagnosticsâ section of your Google Lighthouse report to help optimise page performance. Switch to a green hosting provider if you are not already using one. Use the Green Web Directory to help you choose one. Work through the W3C Web Sustainability Guidelines, implementing the most relevant guidelines to your specific user journey. Remove anything that is not adding any user or business value. Reduce the amount of information on your webpages to make them easier to read and less overwhelming for people. Replace content with a lighter-weight alternative (such as swapping a video for text) if the lighter-weight alternative provides the same value. Optimise assets such as photos, videos, and code to reduce file sizes. Remove any barriers to accessing your website and any distractions that are getting in the way. Re-use familiar components and design patterns to make your websites quicker and easier to use. Write simply and clearly in plain English to help people get the most value from your website and to help them avoid making mistakes that waste time and energy to resolve. Fix any usability issues you identified during your benchmarking to ensure that your website is as easy to use and useful as possible. Ensure your user journey is as accessible as possible so the widest possible audience can benefit from using it, offsetting the environmental cost of providing the website. 
Step 5: Track And Share Your Progress As you decarbonise your user journeys, use the benchmarking tools from step 2 to track your progress against the targets you set in step 3 and share your progress as part of your wider sustainability reporting initiatives. All being well at this point, you will have the numbers to demonstrate how the performance of your user journey has improved and also how you have managed to reduce its carbon footprint. Share these results with the business as soon as you have them to help you secure the resources to continue the work and initiate similar work on other high-value user journeys. You should also start to communicate your progress with your users. Itâs important that they are made aware of the carbon footprint of their digital activity and empowered to make informed choices about the environmental impact of the websites that they use. Ideally, every website should communicate the emissions generated from viewing their pages to help people make these informed choices and also to encourage website providers to minimise their emissions if they are being displayed publicly. Often, people will have no choice but to use a specific website to complete a specific task, so it is the responsibility of the website provider to ensure the environmental impact of using their website is as small as possible. You can also help to raise awareness of the environmental impact of websites and what you are doing to minimise your own impact by publishing a digital sustainability statement, such as Unileverâs, as shown below. A good digital sustainability statement should acknowledge the environmental impact of your website, what you have done to reduce it, and what you plan to do next to minimise it further. As an industry, we should normalise publishing digital sustainability statements in the same way that accessibility statements have become a standard addition to website footers. Useful Decarbonising Principles Keep these principles in mind to help you decarbonise your user journeys: More doing and less talking. Start decarbonising your user journeys as soon as possible to accelerate your learning and positive change. Start small. Starting small by decarbonising an individual journey makes it easier to get started and generates results to demonstrate value faster. Aim to do more with less. Minimise what you offer to ensure you are providing the maximum amount of value for the energy you are consuming. Make your website as useful and as easy to use as possible. Useful websites can justify the energy they consume to provide them, ensuring they are net positive in terms of doing more good than harm. Focus on progress over perfection. Websites are never finished or perfect but they can always be improved, every small improvement you make will make a difference. Start Decarbonising Your User Journeys Today Decarbonising user journeys shouldnât be done as a one-off, reserved for the next time that you decide to redesign or replatform your website; it should happen on a continual basis as part of your broader digital sustainability strategy. We know that websites are never finished and that the best websites continually improve as both user and business needs change. Iâd like to encourage people to adopt the same mindset when it comes to minimising the environmental impact of their websites. Decarbonising will happen most effectively when digital professionals challenge themselves on a daily basis to âminimiseâ the things they are working on. 
This avoids building âcarbon debtâ that consists of compounding technical and design debt within our websites, which is always harder to retrospectively remove than avoid in the first place. By taking a pragmatic approach, such as optimising high-value user journeys and aligning with business metrics such as performance, we stand the best possible chance of making digital sustainability a priority. Youâll have noticed that, other than using website carbon calculator tools, this approach doesnât require any skills that donât already exist within typical digital teams today. This is great because it means youâve already got the skills that you need to do this important work. I would encourage everyone to raise the issue of the environmental impact of the internet in their next team meeting and to try this decarbonising approach to create better outcomes for people, profit, performance, purpose, and the planet. Good luck!
SerpApi: A Complete API For Fetching Search Engine Data
From competitive SEO research and monitoring prices to training AI and parsing local geographic data, real-time search results power smarter apps. Tools like SerpApi make it easy to pull, customize, and integrate this data directly into your app or website.
This article is a sponsored by SerpApi SerpApi leverages the power of search engine giants, like Google, DuckDuckGo, Baidu, and more, to put together the most pertinent and accurate search result data for your users from the comfort of your app or website. Itâs customizable, adaptable, and offers an easy integration into any project. What do you want to put together? Search information on a brand or business for SEO purposes; Input data to train AI models, such as the Large Language Model, for a customer service chatbot; Top news and websites to pick from for a subscriber newsletter; Google Flights API: collect flight information for your travel app; Price comparisons for the same product across different platforms; Extra definitions and examples for words that can be offered along a language learning app. The list goes on. In other words, you get to leverage the most comprehensive source of data on the internet for any number of needs, from competitive SEO research and tracking news to parsing local geographic data and even completing personal background checks for employment. Start With A Simple GET Request The results from the search API are only a URL request away for those who want a super quick start. Just add your search details in the URL parameters. Say you need the search result for âStone Hengeâ from the location âWestminster, England, United Kingdomâ in language âen-GBâ, and country of search origin âukâ from the domain âgoogle.co.ukâ. Hereâs how simple it is to put the GET request together: https://serpapi.com/search.json?q=Stone+Henge&location=Westminster,+England,+United+Kingdom&hl=en-GB&gl=uk&google_domain=google.co.uk&api_key=your_api_key Then thereâs the impressive list of libraries that seamlessly integrate the APIs into mainstream programming languages and frameworks such as JavaScript, Ruby, .NET, and more. Give It A Quick Try Want to give it a spin? Sign up and start for free, or tinker with the SerpApiâs live playground without signing up. The playground allows you to choose which search engine to target, and you can fill in the values for all the basic parameters available in the chosen API to customize your search. On clicking âSearchâ, you get the search result page and its extracted JSON data. If you need to get a feel for the full API first, you can explore their easy-to-grasp web documentation before making any decision. You have the chance to work with all of the APIs to your satisfaction before committing to it, and when that time comes, SerpApiâs multiple price plans tackle anywhere between an economic few hundred searches a month and bulk queries fit for large corporations. What Data Do You Need? Beyond the rudimentary search scraping, SerpApi provides a range of configurations, features, and additional APIs worth considering. Geolocation Capture the global trends, or refine down to more localized particulars by names of locations or Googleâs place identifiers. SerpApiâs optimized routing of requests ensures accurate retrieval of search result data from any location worldwide. If locations themselves are the answers to your queries â say, a cycle trail to be suggested in a fitness app â those can be extracted and presented as maps using SerpApiâs Google Maps API. Structured JSON Although search engines reveal results in a tidy user interface, deriving data into your application could cause you to end up with a large data dump to be sifted through â but not if youâre using SerpApi. 
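To show how the Stone Henge request above might look inside an application rather than a browser address bar, here is a small sketch using the built-in fetch API (Node 18+ or the browser). The parameters mirror the example URL; the response fields accessed at the end (such as organic_results) reflect SerpApi's typical JSON output, but check the documentation for the exact shape returned by your chosen engine and plan.

```typescript
// Sketch: the Stone Henge example issued from application code.
// Field names in the parsed response are based on SerpApi's documented JSON output.

const params = new URLSearchParams({
  q: "Stone Henge",
  location: "Westminster, England, United Kingdom",
  hl: "en-GB",
  gl: "uk",
  google_domain: "google.co.uk",
  api_key: process.env.SERPAPI_API_KEY ?? "your_api_key",
});

async function searchStoneHenge() {
  const response = await fetch(`https://serpapi.com/search.json?${params}`);
  if (!response.ok) {
    throw new Error(`SerpApi request failed with status ${response.status}`);
  }
  const data = await response.json();

  // Print the first few organic results: title and link.
  for (const result of (data.organic_results ?? []).slice(0, 3)) {
    console.log(result.title, "->", result.link);
  }
}

searchStoneHenge().catch(console.error);
```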
SerpApi pulls data in a well-structured JSON format, even for the popular kinds of enriched search results, such as knowledge graphs, review snippets, sports league stats, ratings, product listings, AI overview, and more. Speedy Results SerpApiâs baseline performance can take care of timely search data for real-time requirements. But what if you need more? SerpApiâs Ludicrous Speed option, easily enabled from the dashboard with an upgrade, provides a super-fast response time. More than twice as fast as usual, thanks to twice the server power. Thereâs also Ludicrous Speed Max, which allocates four times more server resources for your data retrieval. Data that is time-sensitive and for monitoring things in real-time, such as sports scores and tracking product prices, will lose its value if it is not handled in a timely manner. Ludicrous Speed Max guarantees no delays, even for a large-scale enterprise haul. You can also use a relevant SerpApi API to hone in on your relevant category, like Google Flights API, Amazon API, Google News API, etc., to get fresh and apt results. If you donât need the full depth of the search API, thereâs a Light version available for Google Search, Google Images, Google Videos, Google News, and DuckDuckGo Search APIs. Search Controls & Privacy Need the results asynchronously picked up? Want a refined output using advanced search API parameters and a JSON Restrictor? Looking for search outcomes for specific devices? Donât want auto-corrected query results? Thereâs no shortage of ways to configure SerpApi to get exactly what you need. Additionally, if you prefer not to have your search metadata on their servers, simply turn on the âZeroTraceâ mode thatâs available for selected plans. The X-Ray Save yourself a headache, literally, trying to play match between what you see on a search result page and its extracted data in JSON. SerpApiâs X-Ray tool shows you where what comes from. Itâs available and free in all plans. Inclusive Support If you donât have the expertise or resources for tackling the validity of scraping search results, hereâs what SerpApi says: âSerpApi, LLC assumes scraping and parsing liabilities for both domestic and foreign companies unless your usage is otherwise illegalâ. You can reach out and have a conversation with them regarding the legal protections they offer, as well as inquire about anything else you might want to know about, including SerpApi in your project, such as pricing, performance expected, on-demand options, and technical support. Just drop a message at their contact page. In other words, the SerpApi team has your back with the support and expertise to get the most from your fetched data. Try SerpApi Free Thatâs right, you can get your hands on SerpApi today and start fetching data with absolutely no commitment, thanks to a free starter plan that gives you up to 250 free search queries. Give it a try and then bump up to one of the reasonably-priced monthly subscription plans with generous search limits. Try SerpApi
Functional Personas With AI: A Lean, Practical Workflow
For too long, personas have been created with considerable effort, only to offer limited value. Paul Boag shows how to breathe new life into this stale UX asset and demonstrates that it's possible to create truly useful functional personas in a lightweight way.
Traditional personas suck for UX work. They obsess over marketing metrics like age, income, and job titles while missing what actually matters in design: what people are trying to accomplish. Functional personas, on the other hand, focus on what people are trying to do, not who they are on paper. With a simple AIâassisted workflow, you can build and maintain personas that actually guide design, content, and conversion decisions. Keep users front of mind with taskâdriven personas, Skip fragile demographics; center on goals, questions, and blockers, Use AI to process your messy inputs fast and fill research gaps, Validate lightly, ship confidently, and keep them updated. In this article, I want to breathe new life into a stale UX asset. For too long, personas have been something that many of us just created, despite the considerable work that goes into them, only to find they have limited usefulness. I know that many of you may have given up on them entirely, but I am hoping in this post to encourage you that it is possible to create truly useful personas in a lightweight way. Why Personas Still Matter Personas give you a shared lens. When everyone uses the same reference point, you cut debate and make better calls. For UX designers, developers, and digital teams, that shared lens keeps you from designing in silos and helps you prioritize work that genuinely improves the experience. I use personas as a quick test: Would this change help this user complete their task faster, with fewer doubts? If the answer is no (or a shrug), itâs probably a sign the idea isnât worth pursuing. From Demographics To Function Traditional personas tell you someoneâs age, job title, or favorite brand. That makes a nice poster, but it rarely changes design or copy. Functional personas flip the script. They describe: Goals & tasks: What the person is here to achieve. Questions & objections: What they need to know before they act. Touchpoints: How the person interacts with the organization. Service gaps: How the company might be letting this persona down. When you center on tasks and friction, you get direct lines from user needs to UI decisions, content, and conversion paths. But remember, this list isnât set in stone â adapt it to whatâs actually useful in your specific situation. One of the biggest problems with traditional personas was following a rigid template regardless of whether it made sense for your project. We must not fall into that same mistake with functional personas. The Benefits of Functional Personas For small startups, functional personas reduce wasted effort. For enterprise teams, they keep sprawling projects grounded in what matters most. However, because of the way we are going to produce our personas, they provide certain benefits in either case: Lighten the load: Theyâre easier to update without large research cycles. Stay current: Because they are easy to produce, we can update them more often. Tie to outcomes: Tasks, objections, and proof points map straight to funnels, flows, and product decisions. We can deliver these benefits because we are going to use AI to help us, rather than carrying out a lot of time-consuming new research. How AI Helps Us Get There Of course, doing fresh research is always preferable. But in many cases, it is not feasible due to time or budget constraints. I would argue that using AI to help us create personas based on existing assets is preferable to having no focus on user attention at all. 
AI tools can chew through the inputs you already have (surveys, analytics, chat logs, reviews) and surface patterns you can act on. They also help you scan public conversations around your product category to fill gaps. I therefore recommend using AI to:

Synthesize inputs: Turn scattered notes into clean themes.
Spot segments by need: Group people by jobs-to-be-done, not demographics.
Draft quickly: Produce first-pass personas and sample journeys in minutes.
Iterate with stakeholders: Update on the fly as you get feedback.

AI doesn't remove the need for traditional research. Rather, it is a way of extracting more value from the scattered insights into users that already exist within an organization or online.

The Workflow

Here's how to move from scattered inputs to usable personas. Each step builds on the last, so treat it as a cycle you can repeat as projects evolve.

1. Set Up A Dedicated Workspace

Create a dedicated space within your AI tool for this work. Most AI platforms offer project management features that let you organize files and conversations:

In ChatGPT and Claude, use "Projects" to store context and instructions.
In Perplexity, Gemini, and Copilot, similar functionality is referred to as "Spaces."

This project space becomes your central repository where all uploaded documents, research data, and generated personas live together. The AI will maintain context between sessions, so you won't have to re-upload materials each time you iterate. This structured approach makes your workflow more efficient and helps the AI deliver more consistent results.

2. Write Clear Instructions

Next, brief your AI project so that it understands what you want from it. For example:

"Act as a user researcher. Create realistic, functional personas using the project files and public research. Segment by needs, tasks, questions, pain points, and goals. Show your reasoning."

Asking for a rationale gives you a paper trail you can defend to stakeholders.

3. Upload What You've Got (Even If It's Messy)

This is where things get really powerful. Upload everything (and I mean everything) you can put your hands on relating to the user. Old surveys, past personas, analytics screenshots, FAQs, support tickets, review snippets; dump them all in. The more varied the sources, the stronger the triangulation.

4. Run Focused External Research

Once you have done that, you can supplement that data by getting AI to carry out "deep research" about your brand. Have AI scan recent (I often focus on the last year) public conversations for your brand, product space, or competitors. Look for:

Who's talking and what they're trying to do;
Common questions and blockers;
Phrases people use (great for copywriting).

Save the report you get back into your project.

5. Propose Segments By Need

Once you have done that, ask AI to suggest segments based on tasks and friction points (not demographics). Push back until each segment is distinct, observable, and actionable. If two would behave the same way in your flow, merge them. This takes a little bit of trial and error and is where your experience really comes into play.

6. Generate Draft Personas

Now that you have your segments, the next step is to draft your personas. Use a simple template so the document is read and used. If your personas become too complicated, people will not read them. Each persona should:

State goals and tasks,
List objections and blockers,
Highlight pain points,
Show touchpoints,
Identify service gaps.
Below is a sample template you can work with:

# Persona Title: e.g. Savvy Shopper
- Person's Name: e.g. John Smith
- Age: e.g. 24
- Job: e.g. Social Media Manager

"A quote that sums up the persona's general attitude"

## Primary Goal
What they're here to achieve (1–2 lines).

## Key Tasks
• Task 1
• Task 2
• Task 3

## Questions & Objections
• What do they need to know before they act?
• What might make them hesitate?

## Pain Points
• Where do they get stuck?
• What feels risky, slow, or confusing?

## Touchpoints
• What channels are they most commonly interacting with?

## Service Gaps
• How is the organization currently failing this persona?

Remember, you should customize this to reflect what will prove useful within your organization.

7. Validate

It is important to validate that what the AI has produced is realistic. Obviously, no persona is a true representation, as it is a snapshot in time of a hypothetical user. However, we do want it to be as accurate as possible. Share your drafts with colleagues who interact regularly with real users, such as people in support or research teams. Where possible, test with a handful of users. Then cut anything you can't defend and correct any errors that are identified.

Troubleshooting & Guardrails

As you work through the above process, you will encounter problems. Here are common pitfalls and how to avoid them:

Too many personas? Merge until each one changes a design or copy decision. Three strong personas beat seven weak ones.
Stakeholder wants demographics? Only include details that affect behavior. Otherwise, leave them out. Suggest separate personas for other functions (such as marketing).
AI hallucinations? Always ask for a rationale or sources. Cross-check with your own data and customer-facing teams.
Not enough data? Mark assumptions clearly, then validate with quick interviews, surveys, or usability tests.

Making Personas Useful In Practice

The most important thing to remember is to actually use your personas once they've been created. They can easily become forgotten PDFs rather than active tools. Instead, personas should shape your work and be referenced regularly. Here are some ways you can put personas to work:

Navigation & IA: Structure menus by top tasks.
Content & Proof: Map objections to FAQs, case studies, and microcopy.
Flows & UI: Streamline steps to match how people think.
Conversion: Match CTAs to personas' readiness, goals, and pain points.
Measurement: Track KPIs that map to personas, not vanity metrics.

With this approach, personas evolve from static deliverables into dynamic reference points your whole team can rely on.

Keep Them Alive

Treat personas as a living toolkit. Schedule a refresh every quarter or after major product changes. Rerun the research pass, regenerate summaries, and archive outdated assumptions. The goal isn't perfection; it's keeping them relevant enough to guide decisions.

Bottom Line

Functional personas are faster to build, easier to maintain, and better aligned with real user behavior. By combining AI's speed with human judgment, you can create personas that don't just sit in a slide deck; they actively shape better products, clearer interfaces, and smoother experiences.
Creating Elastic And Bounce Effects With Expressive Animator
Elastic and bounce effects have long been among the most desirable but time-consuming techniques in motion design. Expressive Animator streamlines the process, making it possible to produce lively animations in seconds, bypassing the tedious work of manual keyframe editing.
This article is sponsored by Expressive. In the world of modern web design, SVG images are used everywhere, from illustrations to icons to background effects, and are universally prized for their crispness and lightweight size. While static SVG images play an important role in web design, most of the time their true potential is unlocked only when they are combined with motion. Few things add more life and personality to a website than a well-executed SVG animation.

But not all animations have the same impact in terms of digital experience. For example, elastic and bounce effects have a unique appeal in motion design because they bring a sense of realism into movement, making animations more engaging and memorable.

However, anyone who has dived into animating SVGs knows the technical hurdles involved. Creating a convincing elastic or bounce effect traditionally requires handling complex CSS keyframes or wrestling with JavaScript animation libraries. Even when using an SVG animation editor, it will most likely require you to manually add the keyframes and adjust the easing functions between them, which can become a time-consuming process of trial and error, no matter the level of experience you have.

This is where Expressive Animator shines. It allows creators to apply elastic and bounce effects in seconds, bypassing the tedious work of manual keyframe editing. And the result is always exceptional: animations that feel alive, produced with a fraction of the effort.

Using Expressive Animator To Create An Elastic Effect

Creating an elastic effect in Expressive Animator is remarkably simple, fast, and intuitive, since the effect is built right into the software as an easing function. This means you only need two keyframes (start and end) to make the effect, and the software will automatically handle the springy motion in between. Even better, the elastic easing can be applied to any animatable property (e.g., position, scale, rotation, opacity, morph, etc.), giving you a consistent way to add it to your animations.

Before we dive into the tutorial, take a look at the video below to see what you will learn to create and the entire process from start to finish. Once you hit the "Create project" button, you can use the Pen and Ellipse tools to create the artwork that will be animated, or you can simply copy and paste the artwork below.

Press the A key on your keyboard to switch to the Node tool, then select the String object and move its handle to the center-right point of the artboard. Don't worry about precision, as the snapping will do all the heavy lifting for you. This will bend the shape and add keyframes for the Morph animator.

Next, press the V key on your keyboard to switch to the Selection tool. With this tool enabled, select the Ball, move it to the right, and place it in the middle of the string. Once again, snapping will do all the hard work, allowing you to position the ball exactly where you want to, while auto-recording automatically adds the appropriate keyframes.

You can now replay the animation and disable auto-recording by clicking on the Auto-Record button again. As you can see when replaying, the direction in which the String and Ball objects are moving is wrong. Fortunately, we can fix this extremely easily just by reversing the keyframes. To do this, select the keyframes in the timeline, right-click to open the context menu, and choose Reverse.
This will reverse the keyframes, and if you replay the animation, you will see that the direction is now correct. With this out of the way, we can finally add the elastic effect. Select all the keyframes in the timeline and click on the Custom easing button to open a dialog with easing options. From the dialog, choose Elastic and set the oscillations to 4 and the stiffness to 2.5. That's it! Click anywhere outside the easing dialog to close it and replay the animation to see the result.

The animation can be exported as well. Press Cmd/Ctrl + E on your keyboard to open the export dialog and choose from various export options, ranging from vectorized formats, such as SVG and Lottie, to rasterized formats, such as GIF and video. For this specific animation, we're going to choose the SVG export format. Expressive Animator allows you to choose between three different types of SVG, depending on the technology used for animation: SMIL, CSS, or JavaScript. Each of these technologies has different strengths and weaknesses, but for this tutorial, we are going to choose SMIL. This is because SMIL-based animations are widely supported, even on Safari browsers, and can be used as background images or embedded in HTML pages using the <img> tag. In fact, Andy Clarke recently wrote all about SMIL animations here at Smashing Magazine if you want a full explanation of how it works.

You can visualize the exported SVG in the following CodePen demo:

Conclusion

Elastic and bounce effects have long been among the most desirable but time-consuming techniques in motion design. By integrating them directly into its easing functions, Expressive Animator removes the complexity of manual keyframe manipulation and transforms what used to be a technical challenge into a creative opportunity.

The best part is that getting started with Expressive Animator comes with zero risk. The software offers a full 7-day free trial without requiring an account, so you can download it instantly and begin experimenting with your own designs right away. After the trial ends, you can buy Expressive Animator with a one-time payment, no subscription required. This will give you a perpetual license covering both Windows and macOS.

To help you get started even faster, I've prepared some extra resources for you. You'll find the source files for the animations created in this tutorial, along with a curated list of useful links that will guide you further in exploring Expressive Animator and SVG animation. These materials are meant to give you a solid starting point so you can learn, experiment, and build on your own with confidence.

Grumpy Egg: The .eaf source file for the sample animation presented at the beginning of this article.
Elastic Effect: Another .eaf file, this time for the animation we made in this tutorial.
Get started with Expressive Animator
Expressive Animator Documentation
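As a rough illustration of the two embedding options mentioned above, here is a minimal, hedged sketch; the elastic.svg file name is a placeholder for whatever you name the exported file, not something shipped with the tutorial:

<!-- As a plain image: the SMIL animation plays with no JavaScript required -->
<img src="elastic.svg" alt="Ball bouncing back on an elastic string" width="400" height="300">

/* Or as a CSS background image */
.hero {
  background: url("elastic.svg") center / contain no-repeat;
}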
From Data To Decisions: UX Strategies For Real-Time Dashboards
Real-time dashboards are decision assistants, not passive displays. In environments like fleet management, healthcare, and operations, the cost of a delay or misstep is high. Karan Rawal explores strategic UX patterns that shorten time-to-decision, reduce cognitive overload, and make live systems trustworthy.
I once worked with a fleet operations team that monitored dozens of vehicles in multiple cities. Their dashboard showed fuel consumption, live GPS locations, and real-time driver updates. Yet the team struggled to see what needed urgent attention. The problem was not a lack of data but a lack of clear indicators to support decision-making. There were no priorities, alerts, or context to highlight what mattered most at any moment.

Real-time dashboards are now critical decision-making tools in industries like logistics, manufacturing, finance, and healthcare. However, many of them fail to help users make timely and confident decisions, even when they show live data. Designing for real-time use is very different from designing static dashboards. The challenge is not only presenting metrics but enabling decisions under pressure. Real-time users face limited time and a high cognitive load. They need clarity on actions, not just access to raw data. This requires interface elements that support quick scanning, pattern recognition, and guided attention. Layout hierarchy, alert colors, grouping, and motion cues all help, but they must be driven by a deeper strategy: understanding what the user must decide in that moment.

This article explores practical UX strategies for real-time dashboards that enable real decisions. Instead of focusing only on visual best practices, it looks at how user intent, personalization, and cognitive flow can turn raw data into meaningful, timely insights.

Designing for Real-Time Comprehension: Helping Users Stay Focused Under Pressure

A GPS app not only shows users their location but also helps them decide where to go next. In the same way, a real-time dashboard should go beyond displaying the latest data. Its purpose is to help users quickly understand complex information and make informed decisions, especially in fast-paced environments with short attention spans.

How Users Process Real-Time Updates

Humans have limited cognitive capacity, so they can only process a small amount of data at once. Without proper context or visual cues, rapidly updating dashboards can overwhelm users and shift attention away from key information. To address this, I use the following approaches:

Delta Indicators and Trend Sparklines: Delta indicators show value changes at a glance, while sparklines are small line charts that reveal trends over time in a compact space. For example, a sales dashboard might show a green upward arrow next to revenue to indicate growth, along with a sparkline displaying sales trends over the past week.

Subtle Micro-Animations: Small animations highlight changes without distracting users. Research in cognitive psychology shows that such animations effectively draw attention, helping users notice updates while staying focused. For instance, a soft pulse around a changing metric can signal activity without overwhelming the viewer.

Mini-History Views: Showing a short history of recent changes reduces reliance on memory. For example, a dashboard might let users scroll back a few minutes to review updates, supporting better understanding and verification of data trends.

Common Challenges In Real-Time Dashboards

Many live dashboards fail when treated as static reports instead of dynamic tools for quick decision-making. In my early projects, I made this mistake, resulting in cluttered layouts, distractions, and frustrated users. Typical errors include the following:

Overcrowded Interfaces: Presenting too many metrics competes for users' attention, making it hard to focus.
Flat Visual Hierarchy: Without clear emphasis on critical data, users might focus on less important information.
No Record of Changes: When numbers update instantly with no explanation, users can feel lost or confused.
Excessive Refresh Rates: Not all data needs constant updates. Updating too frequently can create unnecessary motion and cognitive strain.

Managing Stress And Cognitive Overload

Under stress, users depend on intuition and focus only on immediately relevant information. If a dashboard updates too quickly or shows conflicting alerts, users may delay actions or make mistakes. It is important to:

Prioritize the most important data first to avoid overwhelming the user.
Offer snapshot or pause options so users can take time to process information.
Use clear indicators to show if an action is required or if everything is operating normally.

In real-time environments, the best dashboards balance speed with calmness and clarity. They are not just data displays but tools that promote live thinking and better decisions.

Enabling Personalization For Effective Data Consumption

Many analytics tools let users build custom dashboards, but these design principles guide layouts that support decision-making. Personalization options such as custom metric selection, alert preferences, and update pacing help manage cognitive load and improve data interpretation.

Cognitive Challenge | UX Risk in Real-Time Dashboards | Design Strategy to Mitigate
Users can't track rapid changes | Confusion, missed updates, second-guessing | Use delta indicators, change animations, and trend sparklines
Limited working memory | Overload from too many metrics at once | Prioritize key KPIs, apply progressive disclosure
Visual clutter under stress | Tunnel vision or misprioritized focus | Apply a clear visual hierarchy, minimize non-critical elements
Unclear triggers or alerts | Decision delays, incorrect responses | Use thresholds, binary status indicators, and plain language
Lack of context/history | Misinterpretation of sudden shifts | Provide micro-history, snapshot freeze, or hover reveal

Common Cognitive Challenges in Real-Time Dashboards and UX Strategies to Overcome Them.

Designing For Focus: Using Layout, Color, And Animation To Drive Real-Time Decisions

Layout, color, and animation do more than improve appearance. They help users interpret live data quickly and make decisions under time pressure. Since users respond to rapidly changing information, these elements must reduce cognitive load and highlight key insights immediately.

Creating a Visual Hierarchy to Guide Attention. A clear hierarchy directs users' eyes to key metrics. Arrange elements so the most important data stands out. For example, place critical figures like sales volume or system health in the upper left corner to match common scanning patterns. Limit visible elements to about five to prevent overload and ease processing. Group related data into cards to improve scannability and help users focus without distraction.

Using Color Purposefully to Convey Meaning. Color communicates meaning in data visualization. Red or orange indicates critical alerts or negative trends, signaling urgency. Blue and green represent positive or stable states, offering reassurance. Neutral tones like gray support background data and make key colors stand out. Ensure accessibility with strong contrast and pair colors with icons or labels. For example, bright red can highlight outages while muted gray marks historical logs, keeping attention on urgent issues.
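To make the color guidance above a little more concrete, here is a small sketch in CSS and HTML; the class names, hex values, and icon are illustrative assumptions, not taken from any of the dashboards described in this article:

/* Status colors carry meaning, but each status also gets an icon and a text
   label in the markup, so the message never depends on hue alone. */
.status {
  display: inline-flex;
  align-items: center;
  gap: 0.4em;
  font-weight: 600;
}
.status--critical { color: #c62828; } /* outages, urgent alerts */
.status--stable   { color: #2e7d32; } /* healthy or positive trends */
.status--muted    { color: #616161; } /* background or historical data */

<!-- Icon + label + color, rather than color alone -->
<span class="status status--critical">⚠ Outage: 3 trackers offline</span>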
Supporting Comprehension with Subtle Animation. Animation should clarify, not distract. Smooth transitions of 200 to 400 milliseconds communicate changes effectively. For instance, upward motion in a line chart reinforces growth. Hover effects and quick animations provide feedback and improve interaction. Thoughtful motion makes changes noticeable while maintaining focus.

Layout, color, and animation create an experience that enables fast, accurate interpretation of live data. Real-time dashboards support continuous monitoring and decision-making by reducing mental effort and highlighting anomalies or trends. Personalization allows users to tailor dashboards to their roles, improving relevance and efficiency. For example, operations managers may focus on system health metrics while sales directors prioritize revenue KPIs. This adaptability makes dashboards dynamic, strategic tools.

Element | Placement & Visual Weight | Purpose & Suggested Colors | Animation Use Case & Effect
Primary KPIs | Center or top-left; bold, large font | Highlight critical metrics; typically stable states | Value updates: smooth increase (200–400 ms)
Controls | Top or left panel; light, minimal visual weight | Provide navigation/filtering; neutral color schemes | User actions: subtle feedback (100–150 ms)
Charts | Middle or right; medium emphasis | Show trends and comparisons; use blue/green for positives, grey for neutral | Chart trends: trail or fade (300–600 ms)
Alerts | Edge of dashboard or floating; high contrast (bold) | Signal critical issues; red/orange for alerts, yellow/amber for warnings | Quick animations for appearance; highlight changes

Design Elements, Placement, Color, and Motion Strategies for Effective Real-Time Dashboards.

Clarity In Motion: Designing Dashboards That Make Change Understandable

If users cannot interpret changes quickly, the dashboard fails regardless of its visual design. Over time, I have developed methods that reduce confusion and make change feel intuitive rather than overwhelming. One of the most effective tools I use is the sparkline, a compact line chart that shows a trend over time and is typically placed next to a key performance indicator. Unlike full charts, sparklines omit axes and labels. Their simplicity makes them powerful, since they instantly show whether a metric is trending up, down, or steady. For example, placing a sparkline next to monthly revenue immediately reveals if performance is improving or declining, even before the viewer interprets the number. When using sparklines effectively, follow these principles:

Pair sparklines with metrics such as revenue, churn rate, or user activity so users can see both the value and its trajectory at a glance.
Simplify by removing clutter like axis lines or legends unless they add real value.
Highlight the latest data point with a dot or accent color since current performance often matters more than historical context.
Limit the time span. Too many data points compress the sparkline and hurt readability. A focused window, such as the last 7 or 30 days, keeps the trend clear.
Use sparklines in comparative tables. When placed in rows (for example, across product lines or regions), they reveal anomalies or emerging patterns that static numbers may hide.

Interactive P&L Performance Dashboard with Forecast and Variance Tracking.

I combine sparklines with directional indicators like arrows and percentage deltas to support quick interpretation.
For example, pairing "▲ +3.2%" with a rising sparkline shows both the direction and scale of change. I do not rely only on color to convey meaning. Since 1 in 12 men is color-blind, using red and green alone can exclude some users. To ensure accessibility, I add shapes and icons alongside color cues.

Micro-animations provide subtle but effective signals. This counters change blindness, our tendency to miss non-salient changes. When numbers update, I use fade-ins or count-up transitions to indicate change without distraction. If a list reorders, such as when top-performing teams shift positions, a smooth slide animation under 300 milliseconds helps users maintain spatial memory. These animations reduce cognitive friction and prevent disorientation.

Layout is critical for clarifying change: I use modular cards with consistent spacing, alignment, and hierarchy to highlight key metrics. Cards are arranged in a sortable grid, allowing filtering by severity, recency, or relevance. Collapsible sections manage dense information while keeping important data visible for quick scanning and deeper exploration.

For instance, in a logistics dashboard, a card labeled "On-Time Deliveries" may display a weekly sparkline. If performance dips, the line flattens or turns slightly red, a downward arrow appears with a -1.8% delta, and the updated number fades in. This gives instant clarity without requiring users to open a detailed chart.

All these design choices support fast, informed decision-making. In high-velocity environments like product analytics, logistics, or financial operations, dashboards must do more than present data. They must reduce ambiguity and help teams quickly detect change, understand its impact, and take action.

Making Reliability Visible: Designing for Trust In Real-Time Data Interfaces

In real-time data environments, reliability is not just a technical feature. It is the foundation of user trust. Dashboards are used in high-stakes, fast-moving contexts where decisions depend on timely, accurate data. Yet these systems often face less-than-ideal conditions such as unreliable networks, API delays, and incomplete datasets. Designing for these realities is not just damage control. It is essential for making data experiences usable and trustworthy. When data lags or fails to load, it can mislead users in serious ways:

A dip in a trendline may look like a market decline when it is only a delay in the stream.
Missing categories in a bar chart, if not clearly signaled, can lead to flawed decisions.

To mitigate this, every data point should be paired with its condition. Interfaces must show not only what the data says but also how current or complete it is. One effective strategy is replacing traditional spinners with skeleton UIs. These are greyed-out, animated placeholders that suggest the structure of incoming data. They set expectations, reduce anxiety, and show that the system is actively working. For example, in a financial dashboard, users might see the outline of a candlestick chart filling in as new prices arrive. This signals that data is being refreshed, not stalled.

Handling Data Unavailability

When data is unavailable, I show cached snapshots from the most recent successful load, labeled with timestamps such as "Data as of 10:42 AM." This keeps users aware of what they are viewing. In operational dashboards such as logistics or monitoring systems, this approach lets users act confidently even when real-time updates are temporarily out of sync.
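Here is a minimal sketch of the skeleton and cached-snapshot patterns just described, assuming a web-based dashboard; the class names and shimmer styling are my own, not from any project mentioned in this article:

<!-- While the chart loads, show its outline instead of a spinner -->
<div class="skeleton" aria-hidden="true"></div>
<p class="freshness">Data as of 10:42 AM</p>

.skeleton {
  min-height: 120px;
  border-radius: 4px;
  background: linear-gradient(90deg, #e6e8eb 25%, #f4f5f7 50%, #e6e8eb 75%);
  background-size: 200% 100%;
  animation: shimmer 1.2s linear infinite;
}

@keyframes shimmer {
  to { background-position: -200% 0; }
}

/* Honour motion-reduction preferences, as recommended in the accessibility notes below */
@media (prefers-reduced-motion: reduce) {
  .skeleton { animation: none; }
}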
Managing Connectivity Failures

To handle connectivity failures, I use auto-retry mechanisms with exponential backoff, giving the system several chances to recover quietly before notifying the user. If retries fail, I maintain transparency with clear banners such as "Offline… Reconnecting…" In one product, this approach prevented users from reloading entire dashboards unnecessarily, especially in areas with unreliable Wi-Fi.

Ensuring Reliability with Accessibility

Reliability strongly connects with accessibility:

Real-time interfaces must announce updates without disrupting user focus, beyond just screen reader compatibility.
ARIA live regions quietly narrate significant changes in the background, giving screen reader users timely updates without confusion.
All controls remain keyboard-accessible.
Animations follow motion-reduction preferences to support users with vestibular sensitivities.

Data Freshness Indicator

A compact but powerful pattern I often implement is the Data Freshness Indicator, a small widget that:

Shows sync status,
Displays the last updated time,
Includes a manual refresh button.

This improves transparency and reinforces user control. Since different users interpret these cues differently, advanced systems allow personalization. For example, analysts may prefer detailed logs of update attempts, while business users might see a simple status such as "Live", "Stale", or "Paused".

Reliability in data visualization is not about promising perfection. It is about creating a resilient, informative experience that supports human judgment by revealing the true state of the system. When users understand what the dashboard knows, what it does not, and what actions it is taking, they are more likely to trust the data and make smarter decisions.

Real-World Case Study

In my work across logistics, hospitality, and healthcare, the challenge has always been to distill complexity into clarity. A well-designed dashboard is more than functional; it serves as a trusted companion in decision-making, embedding clarity, speed, and confidence from the start.

1. Fleet Management Dashboard

A client in the car rental industry struggled with fragmented operational data. Critical details like vehicle locations, fuel usage, maintenance schedules, and downtime alerts were scattered across static reports, spreadsheets, and disconnected systems. Fleet operators had to manually cross-reference data sources, even for basic dispatch tasks, which caused missed warnings, inefficient routing, and delays in response. We solved these issues by redesigning the dashboard strategically, focusing on both layout improvements and how users interpret and act on information.

Strategic Design Improvements and Outcomes:

Instant visibility of KPIs: High-contrast cards at the top of the dashboard made key performance indicators instantly visible. Example: Fuel consumption anomalies that previously went unnoticed for days were flagged within hours, enabling quick corrective action.

Clear trend and pattern visualization: Booking forecasts, utilization graphs, and city-by-city comparisons highlighted performance trends. Example: A weekday-weekend booking chart helped a regional manager spot underperformance in one city and plan targeted vehicle redistribution.

Unified operational snapshot: Cost, downtime, and service schedules were grouped into one view. Result: The operations team could assess fleet health in under five minutes each morning instead of using multiple tools.
Predictive context for planning: Visual cues showed peak usage periods and historical demand curves. Result: Dispatchers prepared for forecasted spikes, reducing customer wait times and improving resource availability.

Live map with real-time status: A color-coded map displays vehicle status: green for active, red for urgent attention, gray for idle. Result: Supervisors quickly identified inactive or delayed vehicles and rerouted resources as needed.

Role-based personalization: Personalization options were built in, allowing each role to customize dashboard views. Example: Fleet managers prioritized financial KPIs, while technicians filtered for maintenance alerts and overdue service reports.

Strategic Impact: The dashboard redesign was not only about improving visuals. It changed how teams interacted with data. Operators no longer needed to search for insights, as the system presented them in line with tasks and decision-making. The dashboard became a shared reference for teams with different goals, enabling real-time problem solving, fewer manual checks, and stronger alignment across roles. Every element was designed to build both understanding and confidence in action.

2. Hospitality Revenue Dashboard

One of our clients, a hospitality group with 11 hotels in the UAE, faced a growing strategic gap. They had data from multiple departments, including bookings, events, food and beverage, and profit and loss, but it was spread across disconnected dashboards.

Strategic Design Improvements and Outcomes:

All revenue streams (rooms, restaurants, bars, and profit and loss) were consolidated into a single filterable dashboard. Example: A revenue manager could filter by property to see if a drop in restaurant revenue was tied to lower occupancy or was an isolated issue. The structure supported daily operations, weekly reviews, and quarterly planning.

Disconnected charts and metrics were replaced with a unified visual narrative showing how revenue streams interacted. Example: The dashboard revealed how event bookings influenced bar sales or staffing. This shifted teams from passive data consumption to active interpretation.

AI modules for demand forecasting, spend prediction, and pricing recommendations were embedded in the dashboard. Result: Managers could test rate changes with interactive sliders and instantly view effects on occupancy, revenue per available room, and food and beverage income. This enabled proactive scenario planning.

Compact, color-coded sparklines were placed next to each key metric to show short- and long-term trends. Result: These visuals made it easy to spot seasonal shifts or channel-specific patterns without switching views or opening separate reports.

Predictive overlays such as forecast bands and seasonality markers were added to performance graphs. Example: If occupancy rose but lagged behind seasonal forecasts, the dashboard surfaced the gap, prompting early action such as promotions or issue checks.

Strategic Impact: By aligning the dashboard structure with real pricing and revenue strategies, the client shifted from static reporting to forward-looking decision-making. This was not a cosmetic interface update. It was a complete rethinking of how data could support business goals. The result enabled every team, from finance to operations, to interpret data based on their specific roles and responsibilities.

3. Healthcare Interoperability Dashboard

In healthcare, timely and accurate access to patient information is essential.
A multi-specialist hospital client struggled with fragmented data. Doctors had to consult separate platforms such as electronic health records, lab results, and pharmacy systems to understand a patient's condition. This fragmented process slowed decision-making and increased risks to patient safety.

Strategic Design Improvements and Outcomes:

Patient medical history was integrated to unify lab reports, medications, and allergy information in one view. Example: A cardiologist could review recent cardiac markers alongside active medications and allergy alerts in the same place, enabling faster diagnosis and treatment.

Lab report tracking was upgraded to show test type, date, status, and a clear summary with labels such as Pending, Completed, and Awaiting Review. Result: Trends were displayed with sparklines and color-coded indicators, helping clinicians quickly spot abnormalities or improvements.

A medication management module was added for prescription entry, viewing, and exporting. It included dosage, frequency, and prescribing physician details. Example: Specialists could customize it to highlight drugs relevant to their practice, reducing overload and focusing on critical treatments.

Rapid filtering options were introduced to search by patient name, medical record number, date of birth, gender, last visit, insurance company, or policy number. Example: Billing staff could locate patients by insurance details, while clinicians filtered records by visits or demographics.

Visual transparency was provided through interactive tooltips explaining alert rationales and flagged data points. Result: Clinicians gained immediate context, such as the reason a lab value was marked as critical, supporting informed and timely decisions.

Strategic Impact: Our design encourages active decision-making instead of passive data review. Interactive tooltips ensure visual transparency by explaining the rationale behind alerts and flagged data points. These information boxes give clinicians immediate context, such as why a lab value is marked critical, helping them understand implications and next steps without delay.

Key UX Insights from the Above 3 Examples

Design should drive conclusions, not just display data. Contextualized data enabled faster and more confident decisions. For example, a logistics dashboard flagged high-risk delays so dispatchers could act immediately.

Complexity should be structured, not eliminated. Tools used timelines, layering, and progressive disclosure to handle dense information. A financial tool groups transactions by time blocks, easing cognitive load without losing detail.

Trust requires clear system logic. Users trusted predictive alerts only after understanding their triggers. A healthcare interface added a "Why this alert?" option that explained the reasoning.

The aim is clarity and action, not visual polish. Redesigns improved speed, confidence, and decision-making. In real-time contexts, confusion delays are more harmful than design flaws.

Final Takeaways

Real-time dashboards are not about overwhelming users with data. They are about helping them act quickly and confidently. The most effective dashboards reduce noise, highlight the most important metrics, and support decision-making in complex environments. Success lies in balancing visual clarity with cognitive ease while accounting for human limits like memory, stress, and attention alongside technical needs.

Do:

Prioritize key metrics in a clear order so priorities are obvious.
For instance, a support manager may track open tickets before response times.
Use subtle micro-animations and small visual cues to indicate changes, helping users spot trends without distraction.
Display data freshness and sync status to build trust.
Plan for edge cases like incomplete or offline data to keep the experience consistent.
Ensure accessibility with high contrast, ARIA labels, and keyboard navigation.

Don't:

Overcrowd the interface with too many metrics.
Rely only on color to communicate critical information.
Update all data at once or too often, which can cause overload.
Hide failures or delays; transparency helps users adapt.

Over time, I've come to see real-time dashboards as decision assistants rather than control panels. When users say, "This helps me stay in control," it reflects a design built on empathy that respects cognitive limits and enhances decision-making. That is the true measure of success.
Integrating CSS Cascade Layers To An Existing Project
The idea behind this is to share a full, unfiltered look at integrating CSS Cascade Layers into an existing legacy codebase. In practice, it's about refactoring existing CSS to use cascade layers without breaking anything.
You can always get a fantastic overview of things in Stephanie Eckles' article, "Getting Started With CSS Cascade Layers". But let's talk about the experience of integrating cascade layers into real-world code, the good, the bad, and the spaghetti. I could have created a sample project for a classic walkthrough, but nah, that's not how things work in the real world. I want to get our hands dirty, like inheriting code with styles that work and no one knows why.

Finding projects without cascade layers was easy. The tricky part was finding one that was messy enough to have specificity and organisation issues, but broad enough to illustrate different parts of cascade layers integration. Ladies and gentlemen, I present you with this Discord bot website by Drishtant Ghosh. I'm deeply grateful to Drishtant for allowing me to use his work as an example. This project is a typical landing page with a navigation bar, a hero section, a few buttons, and a mobile menu. You see how it looks perfect on the outside. Things get interesting, however, when we look at the CSS styles under the hood.

Understanding The Project

Before we start throwing @layers around, let's get a firm understanding of what we're working with. I cloned the GitHub repo, and since our focus is working with CSS Cascade Layers, I'll focus only on the main page, which consists of three files: index.html, index.css, and index.js.

Note: I didn't include other pages of this project as it'd make this tutorial too verbose. However, you can refactor the other pages as an experiment.

The index.css file is over 450 lines of code, and skimming through it, I can see some red flags right off the bat:

There's a lot of code repetition with the same selectors pointing to the same HTML element.
There are quite a few #id selectors, which one might argue shouldn't be used in CSS (and I am one of those people).
#botLogo is defined twice and over 70 lines apart.
The !important keyword is used liberally throughout the code.

And yet the site works. There is nothing "technically" wrong here, which is another reason CSS is a big, beautiful monster: errors are silent!

Planning The Layer Structure

Now, some might be thinking, "Can't we simply move all of the styles into a single layer, like @layer legacy, and call it a day?" You could, but I don't think you should. Think about it: if more layers are added after the legacy layer, they should override the styles contained in the legacy layer because the specificity of layers is organized by priority, where the layers declared later carry higher priority.

/* new is more specific */
@layer legacy, new;

/* legacy is more specific */
@layer new, legacy;

That said, we must remember that the site's existing styles make liberal use of the !important keyword. And when that happens, the order of cascade layers gets reversed. So, even though the layers are outlined like this:

@layer legacy, new;

…any styles with an !important declaration suddenly shake things up. In this case, the priority order becomes:

!important styles in the legacy layer (most powerful),
!important styles in the new layer,
Normal styles in the new layer,
Normal styles in the legacy layer (least powerful).

I just wanted to clear that part up. Let's continue. We know that cascade layers handle specificity by creating an explicit order where each layer has a clear responsibility, and later layers always win. So, I decided to split things up into five distinct layers:

reset: Browser default resets like box-sizing, margins, and paddings.
base: Default styles of HTML elements, like body, h1, p, a, etc., including default typography and colours.
layout: Major page structure stuff for controlling how elements are positioned.
components: Reusable UI segments, like buttons, cards, and menus.
utilities: Single helper modifiers that do just one thing and do it well.

This is merely how I like to break things out and organize styles. Zell Liew, for example, has a different set of four buckets that could be defined as layers. There's also the concept of dividing things up even further into sublayers:

@layer components {
  /* sub-layers */
  @layer buttons, cards, menus;
}

/* or this: */
@layer components.buttons, components.cards, components.menus;

That might come in handy, but I also don't want to overly abstract things. That might be a better strategy for a project that's scoped to a well-defined design system. Another thing we could leverage is unlayered styles and the fact that any normal styles not contained in a cascade layer get the highest priority (note that this only holds without !important, which reverses the order as described earlier):

@layer legacy {
  a { color: red; }
}

@layer reset {
  a { color: orange; }
}

@layer base {
  a { color: yellow; }
}

/* unlayered */
a { color: green; } /* highest priority */

But I like the idea of keeping all styles organized in explicit layers because it keeps things modular and maintainable, at least in this context. Let's move on to adding cascade layers to this project.

Integrating Cascade Layers

We need to define the layer order at the top of the file:

@layer reset, base, layout, components, utilities;

This makes it easy to tell which layer takes precedence over which (they get more priority from left to right), and now we can think in terms of layer responsibility instead of selector weight. Moving forward, I'll proceed through the stylesheet from top to bottom.

First, I noticed that the Poppins font was imported in both the HTML and CSS files, so I removed the CSS import and left the one in index.html, as that's generally recommended for quickly loading fonts. Next is the universal selector (*) styles, which include classic reset styles that are perfect for @layer reset:

@layer reset {
  * {
    margin: 0;
    padding: 0;
    box-sizing: border-box;
  }
}

With that out of the way, the body selector is next. I'm putting this into @layer base because it contains core styles for the project, like backgrounds and fonts:

@layer base {
  body {
    background-image: url("bg.svg"); /* Renamed to bg.svg for clarity */
    font-family: "Poppins", sans-serif;
    /* ... other styles */
  }
}

The way I'm tackling this is that styles in the base layer should generally affect the whole document. So far, no page breaks or anything.

Swapping IDs For Classes

Following the body element selector is the page loader, which is defined as an ID selector, #loader. I'm a firm believer in using class selectors over ID selectors as much as possible. It keeps specificity low by default, which prevents specificity battles and makes the code a lot more maintainable. So, I went into the index.html file and refactored elements with id="loader" to class="loader". In the process, I saw another element with id="page" and changed that at the same time.

While still in the index.html file, I noticed a few div elements missing closing tags. It is astounding how permissive browsers are with that. Anyways, I cleaned those up and moved the <script> tag out of the .heading element to be a direct child of body. Let's not make it any tougher to load our scripts.
Now that we've levelled the specificity playing field by moving IDs to classes, we can drop them into the components layer since a loader is indeed a reusable component:

@layer components {
  .loader { width: 100%; height: 100vh; /* ... */ }
  .loader .loading { /* ... */ }
  .loader .loading span { /* ... */ }
  .loader .loading span:before { /* ... */ }
}

Animations

Next are keyframes, and this was a bit tricky, but I eventually chose to isolate animations in their own new fifth layer and updated the layer order to include it:

@layer reset, base, layout, components, utilities, animations;

But why place animations as the last layer? Because animations are generally the last to run and shouldn't be affected by style conflicts. I searched the project's styles for @keyframes and dumped them into the new layer:

@layer animations {
  @keyframes loading { /* ... */ }
  @keyframes loading2 { /* ... */ }
  @keyframes pageShow { /* ... */ }
}

This gives a clear distinction of static styles from dynamic ones while also enforcing reusability.

Layouts

The #page selector has the same ID problem as #loader, and since we fixed it in the HTML earlier, we can modify it to .page and drop it in the layout layer, as its main purpose is to control the initial visibility of the content:

@layer layout {
  .page { display: none; }
}

Custom Scrollbars

Where do we put these? Scrollbars are global elements that persist across the site. This might be a gray area, but I'd say it fits perfectly in @layer base since it's a global, default feature.

@layer base {
  /* ... */
  ::-webkit-scrollbar { width: 8px; }
  ::-webkit-scrollbar-track { background: #0e0e0f; }
  ::-webkit-scrollbar-thumb { background: #5865f2; border-radius: 100px; }
  ::-webkit-scrollbar-thumb:hover { background: #202225; }
}

I also removed the !important keywords as I came across them.

Navigation

The nav element is pretty straightforward, as it is the main structure container that defines the position and dimensions of the navigation bar. It should definitely go in the layout layer:

@layer layout {
  /* ... */
  nav {
    display: flex;
    height: 55px;
    width: 100%;
    padding: 0 50px; /* Consistent horizontal padding */
    /* ... */
  }
}

Logo

We have three style blocks that are tied to the logo: nav .logo, .logo img, and #botLogo. These names are redundant and could benefit from inheritance and component reusability. Here's how I'm approaching it:

The nav .logo is overly specific since the logo can be reused in other places. I dropped the nav so that the selector is just .logo. There was also an !important keyword in there, so I removed it.
I updated .logo to be a Flexbox container to help position .logo img, which was previously set with less flexible absolute positioning.
The #botLogo ID is declared twice, so I merged the two rulesets into one and lowered its specificity by making it a .botLogo class. And, of course, I updated the HTML to replace the ID with the class.
The .logo img selector becomes .botLogo, making it the base class for styling all instances of the logo.

Now, we're left with this:

/* initially .logo img */
.botLogo {
  border-radius: 50%;
  height: 40px;
  border: 2px solid #5865f2;
}

/* initially #botLogo */
.botLogo {
  border-radius: 50%;
  width: 180px;
  /* ... */
}

The difference is that one is used in the navigation and the other in the hero section heading. We can transform the second .botLogo by slightly increasing the specificity with a .heading .botLogo selector. We may as well clean up any duplicated styles as we go.
Let's place the entire code in the components layer as we've successfully turned the logo into a reusable component:

@layer components {
  /* ... */
  .logo {
    font-size: 30px;
    font-weight: bold;
    color: #fff;
    display: flex;
    align-items: center;
    gap: 10px;
  }

  .botLogo {
    aspect-ratio: 1; /* maintains square dimensions with width */
    border-radius: 50%;
    width: 40px;
    border: 2px solid #5865f2;
  }

  .heading .botLogo {
    width: 180px;
    height: 180px;
    background-color: #5865f2;
    box-shadow: 0px 0px 8px 2px rgba(88, 101, 242, 0.5);
    /* ... */
  }
}

This was a bit of work! But now the logo is properly set up as a component that fits perfectly in the new layer architecture.

Navigation List

This is a typical navigation pattern. Take an unordered list (<ul>) and turn it into a flexible container that displays all of the list items horizontally on the same row (with wrapping allowed). It's a type of navigation that can be reused, which belongs in the components layer. But there's a little refactoring to do before we add it. There's already a .mainMenu class, so let's lean into that. We'll swap out any nav ul selectors with that class. Again, it keeps specificity low while making it clearer what that element does.

@layer components {
  /* ... */
  .mainMenu {
    display: flex;
    flex-wrap: wrap;
    list-style: none;
  }

  .mainMenu li {
    margin: 0 4px;
  }

  .mainMenu li a {
    color: #fff;
    text-decoration: none;
    font-size: 16px;
    /* ... */
  }

  .mainMenu li a:where(.active, .hover) {
    color: #fff;
    background: #1d1e21;
  }

  .mainMenu li a.active:hover {
    background-color: #5865f2;
  }
}

There are also two buttons in the code that are used to toggle the navigation between "open" and "closed" states when the navigation is collapsed on smaller screens. It's tied specifically to the .mainMenu component, so we'll keep everything together in the components layer. We can combine and simplify the selectors in the process for cleaner, more readable styles:

@layer components {
  /* ... */
  nav:is(.openMenu, .closeMenu) {
    font-size: 25px;
    display: none;
    cursor: pointer;
    color: #fff;
  }
}

I also noticed that several other selectors in the CSS were not used anywhere in the HTML. So, I removed those styles to keep things trim. There are automated ways to go about this, too.

Media Queries

Should media queries have a dedicated layer (@layer responsive), or should they be in the same layer as their target elements? I really struggled with that question while refactoring the styles for this project. I did some research and testing, and my verdict is the latter: media queries ought to be in the same layer as the elements they affect. My reasoning is that keeping them together:

Maintains responsive styles with their base element styles,
Makes overrides predictable, and
Flows well with component-based architecture common in modern web development.

However, it also means responsive logic is scattered across layers. But it beats the alternative: a gap between the layer where elements are styled and the layer where their responsive behaviors are managed. That's a deal-breaker for me because it's way too easy to update styles in one layer and forget to update their corresponding responsive style in the responsive layer. The other big point is that media queries in the same layer have the same priority as their elements. This is consistent with my overall goal of keeping the CSS Cascade simple and predictable, free of style conflicts. Plus, the CSS nesting syntax makes the relationship between media queries and elements super clear.
Here's an abbreviated example of how things look when we nest media queries in the components layer:

@layer components {
  .mainMenu {
    display: flex;
    flex-wrap: wrap;
    list-style: none;
  }

  @media (max-width: 900px) {
    .mainMenu {
      width: 100%;
      text-align: center;
      height: 100vh;
      display: none;
    }
  }
}

This also allows me to nest a component's child element styles (e.g., nav .openMenu and nav .closeMenu).

@layer components {
  nav {
    &.openMenu {
      display: none;

      @media (max-width: 900px) {
        &.openMenu {
          display: block;
        }
      }
    }
  }
}

Typography & Buttons

The .title and .subtitle can be seen as typography components, so they and their responsive associates go into (you guessed it) the components layer:

@layer components {
  .title {
    font-size: 40px;
    font-weight: 700;
    /* etc. */
  }

  .subtitle {
    color: rgba(255, 255, 255, 0.75);
    font-size: 15px;
    /* etc. */
  }

  @media (max-width: 420px) {
    .title { font-size: 30px; }
    .subtitle { font-size: 12px; }
  }
}

What about buttons? Like many websites, this one has a class, .btn, for that component, so we can chuck those in there as well:

@layer components {
  .btn {
    color: #fff;
    background-color: #1d1e21;
    font-size: 18px;
    /* etc. */
  }

  .btn-primary {
    background-color: #5865f2;
  }

  .btn-secondary {
    transition: all 0.3s ease-in-out;
  }

  .btn-primary:hover {
    background-color: #5865f2;
    box-shadow: 0px 0px 8px 2px rgba(88, 101, 242, 0.5);
    /* etc. */
  }

  .btn-secondary:hover {
    background-color: #1d1e21;
    background-color: rgba(88, 101, 242, 0.7);
  }

  @media (max-width: 420px) {
    .btn {
      font-size: 14px;
      margin: 2px;
      padding: 8px 13px;
    }
  }

  @media (max-width: 335px) {
    .btn {
      display: flex;
      flex-direction: column;
    }
  }
}

The Final Layer

We haven't touched the utilities layer yet! I've reserved this layer for helper classes that are designed for specific purposes, like hiding content. Or, in this case, there's a .noselect class that fits right in. It has a single reusable purpose: to disable selection on an element. So, that's going to be the only style rule in our utilities layer:

@layer utilities {
  .noselect {
    -webkit-touch-callout: none;
    -webkit-user-select: none;
    -khtml-user-select: none;
    -webkit-user-drag: none;
    -moz-user-select: none;
    -ms-user-select: none;
    user-select: none;
  }
}

And that's it! We've completely refactored the CSS of a real-world project to use CSS Cascade Layers. You can compare where we started with the final code.

It Wasn't All Easy

That's not to say that working with Cascade Layers was challenging, but there were some sticky points in the process that forced me to pause and carefully think through what I was doing. I kept some notes as I worked:

It's tough to determine where to start with an existing project. However, by defining the layers first and setting their priority levels, I had a framework for deciding how and where to move specific styles, even though I was not totally familiar with the existing CSS. That helped me avoid situations where I might second-guess myself or define extra, unnecessary layers.

Browser support is still a thing! I mean, Cascade Layers enjoy 94% support coverage as I'm writing this, but you might be one of those sites that needs to accommodate legacy browsers that are unable to support layered styles.

It wasn't clear where media queries fit into the process. Media queries put me on the spot to find where they work best: nested in the same layers as their selectors, or in a completely separate layer? I went with the former, as you know.

The !important keyword is a juggling act.
It inverts the entire layering priority system (with `!important`, earlier layers win over later ones), and this project was littered with instances. Once you start chipping away at those, the existing CSS architecture erodes, and you have to balance refactoring the code against fixing what's already there so you still know exactly how styles cascade. Overall, refactoring a codebase for CSS Cascade Layers is a bit daunting at first glance. The important thing, though, is to acknowledge that it isn't really the layers that complicate things, but the existing codebase. It's tough to completely overhaul someone's existing approach for a new one, even if the new approach is elegant. Where Cascade Layers Helped (And Didn't) Establishing layers improved the code, no doubt. I'm sure there are some performance benefits in there since we were able to remove unused and conflicting styles, but the real win is a more maintainable set of styles. It's easier to find what you need, know what specific style rules are doing, and where to insert new styles moving forward. At the same time, I wouldn't say that Cascade Layers are a silver bullet solution. Remember, CSS is intrinsically tied to the HTML structure it queries. If the HTML you're working with is unstructured and suffers from div-itis, then you can safely bet that the effort to untangle that mess is higher and involves rewriting markup at the same time. Still, refactoring CSS for Cascade Layers is most certainly worth it for the maintenance enhancements alone. It may be "easier" to start from scratch and define layers as you work from the ground up because there's less inherited overhead and technical debt to sort through. But if you have to start from an existing codebase, you might need to untangle the complexity of your styles first to determine exactly how much refactoring you're looking at.
Designing For TV: Principles, Patterns And Practical Guidance (Part 2)
After covering in detail the underlying interaction paradigms of TV experiences in [Part 1](https://www.smashingmagazine.com/2025/08/designing-tv-evergreen-pattern-shapes-tv-experiences/), it's time to get practical. In the second part of the series, you'll explore the building blocks of the "10-foot experience" and how to best utilise them in your designs.
Having covered the developmental history and legacy of TV in Part 1, letâs now delve into more practical matters. As a quick reminder, the â10-foot experienceâ and its reliance on the six core buttons of any remote form the basis of our efforts, and as youâll see, most principles outlined simply reinforce the unshakeable foundations. In this article, weâll sift through the systems, account for layout constraints, and distill the guidelines to understand the essence of TV interfaces. Once weâve collected all the main ingredients, weâll see what we can do to elevate these inherently simplistic experiences. Letâs dig in, and letâs get practical! The Systems When it comes to hardware, TVs and set-top boxes are usually a few generations behind phones and computers. Their components are made to run lightweight systems optimised for viewing, energy efficiency, and longevity. Yet even within these constraints, different platforms offer varying performance profiles, conventions, and price points. Some notable platforms/systems of today are: Roku, the most affordable and popular, but severely bottlenecked by weak hardware. WebOS, most common on LG devices, relies on web standards and runs well on modest hardware. Android TV, considered very flexible and customisable, but relatively demanding hardware-wise. Amazon Fire, based on Android but with a separate ecosystem. It offers great smooth performance, but is slightly more limited than stock Android. tvOS, by Apple, offering a high-end experience followed by a high-end price with extremely low customizability. Despite their differences, all of the platforms above share something in common, and by now youâve probably guessed that it has to do with the remote. Letâs take a closer look: If these remotes were stripped down to just the D-pad, OK, and BACK buttons, they would still be capable of successfully navigating any TV interface. It is this shared control scheme that allows for the agnostic approach of this article with broadly applicable guidelines, regardless of the manufacturer. Having already discussed the TV remote in detail in Part 1, letâs turn to the second part of the equation: the TV screen, its layout, and the fundamental building blocks of TV-bound experiences. TV Design Fundamentals The Screen With almost one hundred years of legacy, TV has accumulated quite some baggage. One recurring topic in modern articles on TV design is the concept of âoverscanâ â a legacy concept from the era of cathode ray tube (CRT) screens. Back then, the lack of standards in production meant that television sets would often crop the projected image at its edges. To address this inconsistency, broadcasters created guidelines to keep important content from being cut off. While overscan gets mentioned occasionally, we should call it what it really is â a thing of the past. Modern panels display content with greater precision, making thinking in terms of title and action safe areas rather archaic. Today, we can simply consider the margins and get the same results. Google calls for a 5% margin layout and Apple advises a 60-point margin top and bottom, and 80 points on the sides in their Layout guidelines. The standard is not exactly clear, but the takeaway is simple: leave some breathing room between screen edge and content, like you would in any thoughtful layout. Having left some baggage behind, we can start considering what to put within and outside the defined bounds. 
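For apps built with web technologies (WebOS apps or browser-based set-top experiences, for instance), that breathing room can be expressed as simple padding on the root container. Here is a minimal sketch based on the guidance above; the 1920×1080 canvas and the `.screen` class are my own illustrative assumptions, not part of any platform guideline:

```css
/* Keep content away from the screen edges on a 1080p TV canvas. */
.screen {
  /* Google's 5% guideline: 5% of 1080px ≈ 54px, 5% of 1920px = 96px */
  padding: 54px 96px;
}

/* Apple's guidance maps roughly to 60pt top/bottom and 80pt on the sides. */
.screen.tvos-style {
  padding: 60px 80px;
}
```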
The Layout Considering the device is made for content consumption, streaming apps such as Netflix naturally come to mind. Broadly speaking, all these interfaces share a common layout structure where a vast collection of content is laid out in a simple grid. These horizontally scrolling groups (sometimes referred to as âshelvesâ) resemble rows of a bookcase. Typically, theyâll contain dozens of items that donât fit into the initial âfoldâ, so weâll make sure the last visible item âpeeksâ from the edge, subtly indicating to the viewer thereâs more content available if they continue scrolling. If we were to define a standard 12-column layout grid, with a 2-column-wide item, weâd end up with something like this: As you can see, the last item falls outside the âsafeâ zone. Tip: A useful trick I discovered when designing TV interfaces was to utilise an odd number of columns. This allows the last item to fall within the defined margins and be more prominent while having little effect on the entire layout. Weâve concluded that overscan is not a prominent issue these days, yet an additional column in the layout helps completely circumvent it. Food for thought! Typography TV design requires us to practice restraint, and this becomes very apparent when working with type. All good typography practices apply to TV design too, but Iâd like to point out two specific takeaways. First, accounting for the distance, everything (including type) needs to scale up. Where 16â18px might suffice for web baseline text, 24px should be your starting point on TV, with the rest of the scale increasing proportionally. âTypography can become especially tricky in 10-ft experiences. When in doubt, go larger.â â Molly Lafferty (Marvel Blog) With that in mind, the second piece of advice would be to start with a small 5â6 size scale and adjust if necessary. The simplicity of a TV experience can, and should, be reflected in the typography itself, and while small, such a scale will do all the âheavy liftingâ if set correctly. What you see in the example above is a scale I reduced from Google and Apple guidelines, with a few size adjustments. Simple as it is, this scale served me well for years, and I have no doubt it could do the same for you. Freebie If youâd like to use my basic reduced type scale Figma design file for kicking off your own TV project, feel free to do so! Color Imagine watching TV at night with the device being the only source of light in the room. You open up the app drawer and select a new streaming app; it loads into a pretty splash screen, and â bam! â a bright interface opens up, which, amplified by the dark surroundings, blinds you for a fraction of a second. That right there is our main consideration when using color on TV. Built for cinematic experiences and often used in dimly lit environments, TVs lend themselves perfectly to darker and more subdued interfaces. Bright colours, especially pure white (#ffffff), will translate to maximum luminance and may be straining on the eyes. As a general principle, you should rely on a more muted color palette. Slightly tinting brighter elements with your brand color, or undertones of yellow to imitate natural light, will produce less visually unsettling results. Finally, without a pointer or touch capabilities, itâs crucial to clearly highlight interactive elements. While using bright colors as backdrops may be overwhelming, using them sparingly to highlight element states in a highly contrasting way will work perfectly. 
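On web-driven TV platforms, this sparing use of a bright accent typically hangs off the element's focus state in CSS. A rough sketch of the idea follows; the `.tile` class, the colours, and the scale factor are illustrative choices rather than platform requirements:

```css
.tile {
  background: #15161a;            /* dark, subdued resting state */
  color: #e8e8e8;
  border: 2px solid transparent;  /* reserve space so focusing doesn't shift layout */
  border-radius: 8px;
  transition: transform 0.15s ease, border-color 0.15s ease;
}

/* The focused tile becomes the single high-contrast element on screen. */
.tile:focus-visible {
  border-color: #f2c94c;
  transform: scale(1.05);
  outline: none;
}
```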
A focus state is the underlying principle of TV navigation. Most commonly, it relies on creating high contrast between the focused and unfocused elements. (Large preview) This highlighting of UI elements is what TV leans on heavily â and it is what weâll discuss next. Focus In Part 1, we have covered how interacting through a remote implies a certain detachment from the interface, mandating reliance on a focus state to carry the burden of TV interaction. This is done by visually accenting elements to anchor the userâs eyes and map any subsequent movement within the interface. If you have ever written HTML/CSS, you might recall the use of the :focus CSS pseudo-class. While itâs primarily an accessibility feature on the web, itâs the core of interaction on TV, with more flexibility added in the form of two additional directions thanks to a dedicated D-pad. Focus Styles There are a few standard ways to style a focus state. Firstly, thereâs scaling â enlarging the focused element, which creates the illusion of depth by moving it closer to the viewer. Example of scaling elements on focus. This is especially common in cases where only images are used for focusable elements. (Large preview) Another common approach is to invert background and text colors. Color inversion on focus, common for highlighting cards. (Large preview) Finally, a border may be added around the highlighted element. Example of border highlights on focus. (Large preview) These styles, used independently or in various combinations, appear in all TV interfaces. While execution may be constrained by the specific system, the purpose remains the same: clear and intuitive feedback, even from across the room. The three basic styles can be combined to produce more focus state variants. (Large preview) Having set the foundations of interaction, layout, and movement, we can start building on top of them. The next chapter will cover the most common elements of a TV interface, their variations, and a few tips and tricks for button-bound navigation. Common TV UI Components Nowadays, the core user journey on television revolves around browsing (or searching through) a content library, selecting an item, and opening a dedicated screen to watch or listen. This translates into a few fundamental screens: Library (or Home) for content browsing, Search for specific queries, and A player screen focused on content playback. These screens are built with a handful of components optimized for the 10-foot experience, and while they are often found on other platforms too, itâs worth examining how they differ on TV. Menus Appearing as a horizontal bar along the top edge of the screen, or as a vertical sidebar, the menu helps move between the different screens of an app. While its orientation mostly depends on the specific system, it does seem TV favors the side menu a bit more. Both menu types share a common issue: the farther the user navigates away from the menu (vertically, toward the bottom for top-bars; and horizontally, toward the right for sidebars), the more button presses are required to get back to it. Fortunately, usually a Back button shortcut is added to allow for immediate menu focus, which greatly improves usability. 16:9 posters abide by the same principles but with a horizontal orientation. They are often paired with text labels, which effectively turn them into cards, commonly seen on platforms like YouTube. 
In the absence of dedicated poster art, they show stills or playback from the videos, matching the aspect ratio of the media itself. 1:1 posters are often found in music apps like Spotify, their shape reminiscent of album art and vinyl sleeves. These squares often get used in other instances, like representing channel links or profile tiles, giving more visual variety to the interface. All of the above can co-exist within a single app, allowing for richer interfaces and breaking up otherwise uniform content libraries. And speaking of breaking up content, letâs see what we can do with spotlights! Spotlights Typically taking up the entire width of the screen, these eye-catching components will highlight a new feature or a promoted piece of media. In a sea of uniform shelves, they can be placed strategically to introduce aesthetic diversity and disrupt the monotony. A spotlight can be a focusable element by itself, or it could expose several actions thanks to its generous space. In my ventures into TV design, I relied on a few different spotlight sizes, which allowed me to place multiples into a single row, all with the purpose of highlighting different aspects of the app, without breaking the form to which viewers were used. Posters, cards, and spotlights shape the bulk of the visual experience and content presentation, but viewers still need a way to find specific titles. Letâs see how search and input are handled on TV. Search And Entering Text Manually browsing through content libraries can yield results, but having the ability to search will speed things up â though not without some hiccups. TVs allow for text input in the form of on-screen keyboards, similar to the ones found in modern smartphones. However, inputting text with a remote control is quite inefficient given the restrictiveness of its control scheme. For example, typing âhey thereâ on a mobile keyboard requires 9 keystrokes, but about 38 on a TV (!) due to the movement between characters and their selection. Typing with a D-pad may be an arduous task, but at the same time, having the ability to search is unquestionably useful. Luckily for us, keyboards are accounted for in all systems and usually come in two varieties. Weâve got the grid layouts used by most platforms and a horizontal layout in support of the touch-enabled and gesture-based controls on tvOS. Swiping between characters is significantly faster, but this is yet another pattern that can only be enhanced, not replaced. Modernization has made things significantly easier, with search autocomplete suggestions, device pairing, voice controls, and remotes with physical keyboards, but on-screen keyboards will likely remain a necessary fallback for quite a while. And no matter how cumbersome this fallback may be, we as designers need to consider it when building for TV. Players And Progress Bars While all the different sections of a TV app serve a purpose, the Player takes center stage. Itâs where all the roads eventually lead to, and where viewers will spend the most time. Itâs also one of the rare instances where focus gets lost, allowing for the interface to get out of the way of enjoying a piece of content. Arguably, players are the most complex features of TV apps, compacting all the different functionalities into a single screen. Take YouTube, for example, its player doesnât just handle expected playback controls but also supports content browsing, searching, reading comments, reacting, and navigating to channels, all within a single screen. 
Compared to YouTube, Netflix offers a very lightweight experience guided by the nature of the app. Still, every player has a basic set of controls, the foundation of which is the progress bar. The progress bar UI element serves as a visual indicator for content duration. During interaction, focus doesnât get placed on the bar itself, but on a movable knob known as the âscrubber.â It is by moving the scrubber left and right, or stopping it in its tracks, that we can control playback. Another indirect method of invoking the progress bar is with the good old Play and Pause buttons. Rooted in the mechanical era of tape players, the universally understood triangle and two vertical bars are as integral to the TV legacy as the D-pad. No matter how minimalist and sleek the modern player interface may be, these symbols remain a staple of the viewing experience. The presence of a scrubber may also indicate the type of content. Video on demand allows for the full set of playback controls, while live streams (unless DVR is involved) will do away with the scrubber since viewers wonât be able to rewind or fast-forward. Earlier iterations of progress bars often came bundled with a set of playback control buttons, but as viewers got used to the tools available, these controls often got consolidated into the progress bar and scrubber themselves. Bringing It All Together With the building blocks out of the box, weâve got everything necessary for a basic but functional TV app. Just as the six core buttons make remote navigation possible, the components and principles outlined above help guide purposeful TV design. The more context you bring, the more youâll be able to expand and combine these basic principles, creating an experience unique to your needs. Before we wrap things up, Iâd like to share a few tips and tricks I discovered along the way â tips and tricks which I wish I had known from the start. Regardless of how simple or complex your idea may be, these may serve you as useful tools to help add depth, polish, and finesse to any TV experience. Thinking Beyond The Basics Like any platform, TV has a set of constraints that we abide by when designing. But sometimes these norms are applied without question, making the already limited capabilities feel even more restraining. Below are a handful of less obvious ideas that can help you design more thoughtfully and flexibly for the big screen. Long Press Most modern remotes support press-and-hold gestures as a subtle way to enhance the functionality, especially on remotes with fewer buttons available. For example, holding directional buttons when browsing content speeds up scrolling, while holding Left/Right during playback speeds up timeline seeking. In many apps, a single press of the OK button opens a video, but holding it for longer opens a contextual menu with additional actions. With limited input, context becomes a powerful tool. It not only declutters the interface to allow for more focus on specific tasks, but also enables the same set of buttons to trigger different actions based on the viewerâs location within an app. Another great example is YouTubeâs scrubber interaction. Once the scrubber is moved, every other UI element fades. This cleans up the viewerâs working area, so to speak, narrowing the interface to a single task. In this state â and only in this state â pressing Up one more time moves away from scrubbing and into browsing by chapter. This is such an elegant example of expanding restraint, and adding more only when necessary. 
I hope it inspires similar interactions in your TV app designs. Efficient Movement On TV At its best, every action on TV âcostsâ at least one click. Thereâs no such thing as aimless cursor movement â if you want to move, you must press a button. Weâve seen how cumbersome it can be inside a keyboard, but thereâs also something we can learn about efficient movement in these restrained circumstances. Going back to the Homescreen, we can note that vertical and horizontal movement serve two distinct roles. Vertical movement switches between groups, while horizontal movement switches items within these groups. No matter how far youâve gone inside a group, a single vertical click will move you into another. Every step on TV âcostsâ an action, so we might as well optimize movement. (Large preview) This subtle difference â two axes with separate roles â is the most efficient way of moving in a TV interface. Reversing the pattern: horizontal to switch groups, and vertical to drill down, will work like a charm as long as you keep the role of each axis well defined. Properly applied in a vertical layout, the principles of optimal movement remain the same. (Large preview) Quietly brilliant and easy to overlook, this pattern powers almost every step of the TV experience. Remember it, and use it well. Thinking Beyond JPGs After covering in detail many of the technicalities, letâs finish with some visual polish. Most TV interfaces are driven by tightly packed rows of cover and poster art. While often beautifully designed, this type of content and layouts leave little room for visual flair. For years, the flat JPG, with its small file size, has been a go-to format, though contemporary alternatives like WebP are slowly taking its place. Meanwhile, we can rely on the tried and tested PNG to give a bit more shine to our TV interfaces. The simple fact that it supports transparency can help the often-rigid UIs feel more sophisticated. Used strategically and paired with simple focus effects such as background color changes, PNGs can bring subtle moments of delight to the interface. Having a transparent background blends well with surface color changes common in TV interfaces. (Large preview) And donât forget, transparency doesnât have to mean that there shouldn't be any background at all. (Large preview) Moreover, if transformations like scaling and rotating are supported, you can really make those rectangular shapes come alive with layering multiple assets. Combining multiple images along with a background color change can liven up certain sections. (Large preview) As you probably understand by now, these little touches of finesse donât go out of bounds of possibility. They simply find more room to breathe within it. But with such limited capabilities, itâs best to learn all the different tricks that can help make your TV experiences stand out. Closing Thoughts Rooted in legacy, with a limited control scheme and a rather âshallowâ interface, TV design reminds us to do the best with what we have at our disposal. The restraints I outlined are not meant to induce claustrophobia and make you feel limited in your design choices, but rather to serve you as guides. It is by accepting that fact that we can find freedom and new avenues to explore. This two-part series of articles, just like my experience designing for TV, was not about reinventing the wheel with radical ideas. It was about understanding its nuances and contributing to whatâs already there with my personal touch. 
If you find yourself working in this design field, I hope my guide will serve as a warm welcome and will help you do your finest work. And if you have any questions, do leave a comment, and I will do my best to reply and help. Good luck! Further Reading âDesign for TV,â by Android Developers Great TV design is all about putting content front and center. It's about creating an interface that's easier to use and navigate, even from a distance. It's about making it easier to find the content you love, and to enjoy it in the best possible quality. âTV Guidelines: A quick kick-off on designing for Television Experiences,â by Andrea Pacheco Just like designing a mobile app, designing a TV application can be a fun and complex thing to do, due to the numerous guidelines and best practices to follow. Below, I have listed the main best practices to keep in mind when designing an app for a 10-foot screen. âDesigning for Television â TV Ui design,â by Molly Lafferty Weâre no longer limited to a remote and cable box to control our TVs; weâre using Smart TVs, or streaming from set-top boxes like Roku and Apple TV, or using video game consoles like Xbox and PlayStation. And each of these devices allows a user interface thatâs much more powerful than your old-fashioned on-screen guide. âRethinking User Interface Design for the TV Platform,â by Pascal Potvin Designing for television has become part of the continuum of devices that require a rethink of how we approach user interfaces and user experiences. âTypography for TV,â by Android Developers As television screens are typically viewed from a distance, interfaces that use larger typography are more legible and comfortable for users. TV Design's default type scale includes contrasting and flexible type styles to support a wide range of use cases. âTypography,â by Apple Developer docs Your typographic choices can help you display legible text, convey an information hierarchy, communicate important content, and express your brand or style. âColor on TV,â by Android Developers Color on TV design can inspire, set the mood, and even drive users to make decisions. It's a powerful and tangible element that users notice first. As a rich way to connect with a wide audience, it's no wonder color is an important step in crafting a high-quality TV interface. âDesigning for Television â TV UI Design,â by Molly Lafferty (Marvel Blog) Today, weâre no longer limited to a remote and cable box to control our TVs; weâre using Smart TVs, or streaming from set-top boxes like Roku and Apple TV, or using video game consoles like Xbox and PlayStation. And each of these devices allows a user interface thatâs much more powerful than your old-fashioned on-screen guide.
A Breeze Of Inspiration In September (2025 Wallpapers Edition)
Could there be a better way to welcome the new month than with a new collection of desktop wallpapers? We've got some eye-catching designs to make your September just a bit more colorful. Enjoy!
September is just around the corner, and that means itâs time for some new wallpapers! For more than 14 years already, our monthly wallpapers series has been the perfect occasion for artists and designers to challenge their creative skills and take on a little just-for-fun project â telling the stories they want to tell, using their favorite tools. This always makes for a unique and inspiring collection of wallpapers month after month, and, of course, this September is no exception. In this post, youâll find desktop wallpapers for September 2025, created with love by the community for the community. As a bonus, weâve also added some oldies but goodies from our archives to the collection, so maybe youâll spot one of your almost-forgotten favorites in here, too? A huge thank-you to everyone who shared their artworks with us this month â this post wouldnât exist without your creativity and support! By the way, if youâd like to get featured in one of our upcoming wallpapers editions, please donât hesitate to submit your design. We are always looking for creative talent and canât wait to see your story come to life! You can click on every image to see a larger preview. We respect and carefully consider the ideas and motivation behind each and every artistâs work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers werenât anyhow influenced by us but rather designed from scratch by the artists themselves. 21st Night Of September âOn the 21st night of September, the world danced in perfect harmony. Earth, Wind & Fire set the tone and now itâs your turn to keep the rhythm alive.â â Designed by Ginger IT Solutions from Serbia. preview with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Who Designed by Ricardo Gimenes from Spain. preview with calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 Skating Through Chocolate Milk Day âCelebrate Chocolate Milk Day with a perfect blend of fun and flavor. From smooth sips to smooth rides, itâs all about enjoying the simple moments that make the day unforgettable.â â Designed by PopArt Studio from Serbia. preview with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Mood Designed by Ricardo Gimenes from Spain. 
preview with calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 Funny Cats âCats are beautiful animals. Theyâre quiet, clean, and warm. Theyâre funny and can become an endless source of love and entertainment. Here for the cats!â â Designed by UrbanUI from India. preview without calendar: 360x640, 1024x768, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1680x1200, 1920x1080 Pigman And Robin Designed by Ricardo Gimenes from Spain. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 Autumn Rains âThis autumn, we expect to see a lot of rainy days and blues, so we wanted to change the paradigm and wish a warm welcome to the new season. After all, if you come to think of it: rain is not so bad if you have an umbrella and a raincoat. Come autumn, we welcome you!â â Designed by PopArt Studio from Serbia. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Terrazzo âWith the end of summer and fall coming soon, I created this terrazzo pattern wallpaper to brighten up your desktop. Enjoy the month!â â Designed by Melissa Bogemans from Belgium. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Summer Ending âAs summer comes to an end, all the creatures pull back to their hiding places, searching for warmth within themselves and dreaming of neverending adventures under the tinted sky of closing dog days.â â Designed by Ana Masnikosa from Belgrade, Serbia. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Cacti Everywhere âSeasons come and go, but our brave cactuses still stand. Summer is almost over and autumn is coming, but the beloved plants donât care.â â Designed by LĂvia LĂ©nĂĄrt from Hungary. preview without calendar: 320x480, 800x480, 1024x768, 1024x1024, 1280x1024, 1400x1050, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Flower Soul âThe earth has music for those who listen. Take a break and relax and while you drive out the stress, catch a glimpse of the beautiful nature around you. Can you hear the rhythm of the breeze blowing, the flowers singing, and the butterflies fluttering to cheer you up? We dedicate flowers which symbolize happiness and love to one and all.â â Designed by Krishnankutty from India. 
preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Stay Or Leave? Designed by Ricardo Gimenes from Spain. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Rainy Flowers Designed by Teodora Vasileva from Bulgaria. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1280x720, 1280x960, 1280x1024, 1400x1050, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Listen Closer⊠The Mushrooms Are Growing âItâs this time of the year when children go to school and grown-ups go to collect mushrooms.â â Designed by Igor Izhik from Canada. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600 Weekend Relax Designed by Robert from the United States. preview without calendar: 320x480, 1024x1024, 1280x720, 1680x1200, 1920x1080, 2560x1440 Hungry Designed by Elise Vanoorbeek from Belgium. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1440x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 National Video Games Day Delight âSeptember 12th brings us National Video Games Day. US-based video game players love this day and celebrate with huge gaming tournaments. What was once a 2D experience in the home is now a global phenomenon with players playing against each other across statelines and national borders via the internet. National Video Games Day gives gamers the perfect chance to celebrate and socialize! So grab your controller, join online, and let the games begin!â â Designed by Ever Increasing Circles from the United Kingdom. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 More Bananas Designed by Ricardo Gimenes from Spain. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 National Elephant Appreciation Day âToday, we celebrate these magnificent creatures who play such a vital role in our ecosystems and cultures. Elephants are symbols of wisdom, strength, and loyalty. Their social bonds are strong, and their playful nature, especially in the young ones, reminds us of the importance of joy and connection in our lives.â â Designed by PopArt Studio from Serbia. 
preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Long Live Summer âWhile Septemberâs Autumnal Equinox technically signifies the end of the summer season, this wallpaper is for all those summer lovers, like me, who donât want the sunshine, warm weather, and lazy days to end.â â Designed by Vicki Grunewald from Washington. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Bear Time Designed by Bojana Stojanovic from Serbia. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1080, 1366x768, 1400x1050, 1440x990, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Still In Vacation Mood âItâs officially the end of summer and Iâm still in vacation mood, dreaming about all the amazing places Iâve seen. This illustration is inspired by a small town in France, on the Atlantic coast, right by the beach.â â Designed by Miruna Sfia from Romania. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1440x900, 1440x1050, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Maryland Pride âAs summer comes to a close, so does the end of blue crab season in Maryland. Blue crabs have been a regional delicacy since the 1700s and have become Marylandâs most valuable fishing industry, adding millions of dollars to the Maryland economy each year. The blue crab has contributed so much to the stateâs regional culture and economy, in 1989 it was named the State Crustacean, cementing its importance in Maryland history.â â Designed by The Hannon Group from Washington DC. preview without calendar: 320x480, 640x480, 800x600, 1024x768, 1280x960, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1440, 2560x1440 Summer In Costa Rica âWe continue in tropical climates. In this case, we travel to Costa Rica to observe the Arenal volcano from the lake while we use a kayak.â â Designed by Veronica Valenzuela from Spain. preview without calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440 Wine Harvest Season âWelcome to the wine harvest season in Serbia. Itâs September, and the hazy sunshine bathes the vines on the slopes of FruĆĄka Gora. Everything is ready for the making of Bermet, the most famous wine from Serbia. This spiced wine was a favorite of the Austro-Hungarian elite and was served even on the Titanic. Bermetâs recipe is a closely guarded secret, and the wine is produced by just a handful of families in the town of Sremski Karlovci, near Novi Sad. On the other side of Novi Sad, plains of corn and sunflower fields blend in with the horizon, catching the last warm sun rays of this year.â â Designed by PopArt Studio from Serbia. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Office âClean, minimalistic office for a productive day.â â Designed by Antun HirĆĄman from Croatia. 
preview without calendar: 320x480, 800x600, 1280x720, 1280x1024, 1440x900, 1680x1050, 1920x1080, 1920x1440, 2560x1440 Colors Of September âI love September. Its colors and smells.â â Designed by Juliagav from Ukraine. preview without calendar: 320x480, 1024x768, 1024x1024, 1280x800, 1280x1024, 1440x900, 1680x1050, 1920x1080, 2560x1440 Never Stop Exploring Designed by Ricardo Gimenes from Spain. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
Prompting Is A Design Act: How To Brief, Guide And Iterate With AI
Prompting is more than giving AI some instructions. You could think of it as a design act, part creative brief and part conversation design. This second article on AI augmenting design work introduces a designerly approach to prompting: one that blends creative briefing, interaction design, and structural clarity.
In âA Week In The Life Of An AI-Augmented Designerâ, we followed Kateâs weeklong journey of her first AI-augmented design sprint. She had three realizations through the process: AI isnât a co-pilot (yet); itâs more like a smart, eager intern. One with access to a lot of information, good recall, fast execution, but no context. That mindset defined how she approached every interaction with AI: not as magic, but as management. Donât trust; guide, coach, and always verify. Like any intern, AI needs coaching and supervision, and thatâs where her designerly skills kicked in. Kate relied on curiosity to explore, observation to spot bias, empathy to humanize the output, and critical thinking to challenge what didnât feel right. Her learning mindset helped her keep up with advances, and experimentation helped her learn by doing. Prompting is part creative brief, and part conversation design, just with an AI instead of a person. When you prompt an AI, youâre not just giving instructions, but designing how it responds, behaves, and outputs information. If AI is like an intern, then the prompt is your creative brief that frames the task, sets the tone, and clarifies what good looks like. Itâs also your conversation script that guides how it responds, how the interaction flows, and how ambiguity is handled. As designers, weâre used to designing interactions for people. Prompting is us designing our own interactions with machines â it uses the same mindset with a new medium. It shapes an AIâs behavior the same way youâd guide a user with structure, clarity, and intent. If youâve bookmarked, downloaded, or saved prompts from others, youâre not alone. Weâve all done that during our AI journeys. But while someone elseâs prompts are a good starting point, you will get better and more relevant results if you can write your own prompts tailored to your goals, context, and style. Using someone elseâs prompt is like using a Figma template. It gets the job done, but mastery comes from understanding and applying the fundamentals of design, including layout, flow, and reasoning. Prompts have a structure too. And when you learn it, you stop guessing and start designing. Note: All prompts in this article were tested using ChatGPT â not because itâs the only game in town, but because itâs friendly, flexible, and lets you talk like a person, yes, even after the recent GPT-5 âupdateâ. That said, any LLM with a decent attention span will work. Results for the same prompt may vary based on the AI model you use, the AIâs training, mood, and how confidently it can hallucinate. Privacy PSA: As always, donât share anything you wouldnât want leaked, logged, or accidentally included in the next AI-generated meme. Keep it safe, legal, and user-respecting. With that out of the way, letâs dive into the mindset, anatomy, and methods of effective prompting as another tool in your design toolkit. Mindset: Prompt Like A Designer As designers, we storyboard journeys, wireframe interfaces to guide users, and write UX copy with intention. However, when prompting AI, we treat it differently: âSummarize these insightsâ, âMake this betterâ, âWrite copy for this screenâ, and then wonder why the output feels generic, off-brand, or just meh. Itâs like expecting a creative team to deliver great work from a one-line Slack message. We wouldnât brief a freelancer, much less an intern, with âDesign a landing page,â so why brief AI that way? 
Prompting Is A Creative Brief For A Machine Think of a good prompt as a creative brief, just for a non-human collaborator. It needs similar elements, including a clear role, defined goal, relevant context, tone guidance, and output expectations. Just as a well-written creative brief unlocks alignment and quality from your team, a well-structured prompt helps the AI meet your expectations, even though it doesnât have real instincts or opinions. Prompting Is Also Conversation Design A good prompt goes beyond defining the task and sets the tone for the exchange by designing a conversation: guiding how the AI interprets, sequences, and responds. You shape the flow of tasks, how ambiguity is handled, and how refinement happens â thatâs conversation design. Anatomy: Structure It Like A Designer So how do you write a designer-quality prompt? Thatâs where the W.I.R.E.+F.R.A.M.E. prompt design framework comes in â a UX-inspired framework for writing intentional, structured, and reusable prompts. Each letter represents a key design direction, grounded in the way UX designers already think: Just as a wireframe doesnât dictate final visuals, this WIRE+FRAME framework doesnât constrain creativity, but guides the AI with structured information it needs. âWhy not just use a series of back-and-forth chats with AI?â You can, and many people do. But without structure, AI fills in the gaps on its own, often with vague or generic results. A good prompt upfront saves time, reduces trial and error, and improves consistency. And whether youâre working on your own or across a team, a framework means youâre not reinventing a prompt every time but reusing what works to get better results faster. Just as we build wireframes before adding layers of fidelity, the WIRE+FRAME framework has two parts: WIRE is the must-have skeleton. It gives the prompt its shape. FRAME is the set of enhancements that bring polish, logic, tone, and reusability â like building a high-fidelity interface from the wireframe. Letâs improve Kateâs original research synthesis prompt (âRead this customer feedback and tell me how we can improve financial literacy for Gen Z in our appâ). To better reflect how people actually prompt in practice, letâs tweak it to a more broadly applicable version: âRead this customer feedback and tell me how we can improve our app for Gen Z users.â This one-liner mirrors the kinds of prompts we often throw at AI tools: short, simple, and often lacking structure. Now, weâll take that prompt and rebuild it using the first four elements of the W.I.R.E. framework â the core building blocks that provide AI with the main information it needs to deliver useful results. W: Who & What Define who the AI should be, and what itâs being asked to deliver. A creative brief starts with assigning the right hat. Are you briefing a copywriter? A strategist? A product designer? The same logic applies here. Give the AI a clear identity and task. Treat AI like a trusted freelancer or intern. Instead of saying âhelp meâ, tell it who it should act as and whatâs expected. Example: âYou are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.â I: Input Context Provide background that frames the task. Creative partners donât work in a vacuum. 
They need context: the audience, goals, product, competitive landscape, and whatâs been tried already. This is the âWhat you need to know before you startâ section of the brief. Think: key insights, friction points, business objectives. The same goes for your prompt. Example: âYou are analyzing customer feedback for Fintech Brandâs app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts.â R: Rules & Constraints Clarify any limitations, boundaries, and exclusions. Good creative briefs always include boundaries â what to avoid, whatâs off-brand, or whatâs non-negotiable. Things like brand voice guidelines, legal requirements, or time and word count limits. Constraints donât limit creativity â they focus it. AI needs the same constraints to avoid going off the rails. Example: âOnly analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language.â E: Expected Output Spell out what the deliverable should look like. This is the deliverable spec: What does the finished product look like? What tone, format, or channel is it for? Even if the task is clear, the format often isnât. Do you want bullet points or a story? A table or a headline? If you donât say, the AI will guess, and probably guess wrong. Even better, include an example of the output you want, an effective way to help AI know what youâre expecting. If youâre using GPT-5, you can also mix examples across formats (text, images, tables) together. Example: âReturn a structured list of themes. For each theme, include: Theme Title Summary of the Issue Problem Statement Opportunity Representative Quotes (from data only) Journey Stage(s) Frequency (count from data) Severity Score (1â5) where 1 = Minor inconvenience or annoyance; 3 = Frustrating but workaround exists; 5 = Blocking issue Estimated Effort (Low / Medium / High), where Low = Copy or content tweak; Medium = Logic/UX/UI change; High = Significant changes.â WIRE gives you everything you need to stop guessing and start designing your prompts with purpose. When you start with WIRE, your prompting is like a briefing, treating AI like a collaborator. Once youâve mastered this core structure, you can layer in additional fidelity, like tone, step-by-step flow, or iterative feedback, using the FRAME elements. These five elements provide additional guidance and clarity to your prompt by layering clear deliverables, thoughtful tone, reusable structure, and space for creative iteration. F: Flow of Tasks Break complex prompts into clear, ordered steps. This is your project plan or creative workflow that lays out the stages, dependencies, or sequence of execution. When the task has multiple parts, donât just throw it all into one sentence. You are doing the thinking and guiding AI. Structure it like steps in a user journey or modules in a storyboard. In this example, it fits as the blueprint for the AI to use to generate the table described in âE: Expected Outputâ Example: âRecommended flow of tasks: Step 1: Parse the uploaded data and extract discrete pain points. Step 2: Group them into themes based on pattern similarity. Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort. Step 4: Map each theme to the appropriate customer journey stage(s). 
Step 5: For each theme, write a clear problem statement and opportunity based only on whatâs in the data.â R: Reference Voice or Style Name the desired tone, mood, or reference brand. This is the brand voice section or style mood board â reference points that shape the creative feel. Sometimes you want buttoned-up. Other times, you want conversational. Donât assume the AI knows your tone, so spell it out. Example: âUse the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads.â A: Ask for Clarification Invite the AI to ask questions before generating, if anything is unclear. This is your âAny questions before we begin?â moment â a key step in collaborative creative work. You wouldnât want a freelancer to guess what you meant if the brief was fuzzy, so why expect AI to do better? Ask AI to reflect or clarify before jumping into output mode. Example: âIf the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement.â M: Memory (Within The Conversation) Reference earlier parts of the conversation and reuse whatâs working. This is similar to keeping visual tone or campaign language consistent across deliverables in a creative brief. Prompts are rarely one-shot tasks, so this reminds AI of the tone, audience, or structure already in play. GPT-5 got better with memory, but this still remains a useful element, especially if you switch topics or jump around. Example: âUnless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each.â E: Evaluate & Iterate Invite the AI to critique, improve, or generate variations. This is your revision loop â your way of prompting for creative direction, exploration, and refinement. Just like creatives expect feedback, your AI partner can handle review cycles if you ask for them. Build iteration into the brief to get closer to what you actually need. Sometimes, you may see ChatGPT test two versions of a response on its own by asking for your preference. Example: âAfter listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort). For that top-priority theme: Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate? Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary). Rewrite the theme entry with that improvement applied. Briefly explain why the revision is stronger and more useful for product or design teams.â Hereâs a quick recap of the WIRE+FRAME framework: Framework Component Description W: Who & What Define the AI persona and the core deliverable. I: Input Context Provide background or data scope to frame the task. R: Rules & Constraints Set boundaries E: Expected Output Spell out the format and fields of the deliverable. F: Flow of Tasks Break the work into explicit, ordered sub-tasks. R: Reference Voice/Style Name the tone, mood, or reference brand to ensure consistency. A: Ask for Clarification Invite AI to pause and ask questions if any instructions or data are unclear before proceeding. M: Memory Leverage in-conversation memory to recall earlier definitions, examples, or phrasing without restating them. 
E: Evaluate & Iterate After generation, have the AI self-critique the top outputs and refine them. And hereâs the full WIRE+FRAME prompt: (W) You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities. (I) You are analyzing customer feedback for Fintech Brandâs app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts. (R) Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language. (E) Return a structured list of themes. For each theme, include: Theme Title Summary of the Issue Problem Statement Opportunity Representative Quotes (from data only) Journey Stage(s) Frequency (count from data) Severity Score (1â5) where 1 = Minor inconvenience or annoyance; 3 = Frustrating but workaround exists; 5 = Blocking issue Estimated Effort (Low / Medium / High), where Low = Copy or content tweak; Medium = Logic/UX/UI change; High = Significant changes (F) Recommended flow of tasks: Step 1: Parse the uploaded data and extract discrete pain points. Step 2: Group them into themes based on pattern similarity. Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort. Step 4: Map each theme to the appropriate customer journey stage(s). Step 5: For each theme, write a clear problem statement and opportunity based only on whatâs in the data. (R) Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads. (A) If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement. (M) Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each. (E) After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort). For that top-priority theme: Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate? Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary). Rewrite the theme entry with that improvement applied. Briefly explain why the revision is stronger and more useful for product or design teams. You could use â##â to label the sections (e.g., â##FLOWâ) more for your readability than for AI. At over 400 words, this Insights Synthesis prompt example is a detailed, structured prompt, but it isnât customized for you and your work. The intent wasnât to give you a specific prompt (the proverbial fish), but to show how you can use a prompt framework like WIRE+FRAME to create a customized, relevant prompt that will help AI augment your work (teaching you to fish). Keep in mind that prompt length isnât a common concern, but rather a lack of quality and structure is. As of the time of writing, AI models can easily process prompts that are thousands of words long. 
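If you want a starting point you can reuse across projects, the WIRE core of that prompt can be distilled into a fill-in-the-blank skeleton. This condensed version is my own summary of the framework, and the bracketed placeholders are illustrative:

```
W (Who & What): You are a [role with the relevant expertise]. Your task is to [deliverable].
I (Input Context): You are working with [product, audience, and data sources]. [Key background.]
R (Rules & Constraints): Only use [the provided data]. Do not [fabricate or speculate]. Use [voice and terminology].
E (Expected Output): Return [format], including [fields], for example: [short sample].
```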
Not every prompt needs all the FRAME components; WIRE is often enough to get the job done. But when the work is strategic or highly contextual, pick components from FRAME: the extra details can make a difference. Together, WIRE+FRAME give you a detailed framework for creating a well-structured prompt, with the crucial components first, followed by optional components: WIRE builds a clear, focused prompt with role, input, rules, and expected output. FRAME adds refinement like tone, reusability, and iteration.

Here are some scenarios and recommendations for using WIRE or WIRE+FRAME:

Simple, One-Off Analyses – Quick prompting with minimal setup and no need for detailed process transparency. Recommended: WIRE
Tight Sprints or Hackathons – Rapid turnarounds, and times you don't need embedded review and iteration loops. Recommended: WIRE
Highly Iterative Exploratory Work – You expect to tweak results constantly and prefer manual control over each step. Recommended: WIRE
Complex Multi-Step Playbooks – Detailed workflows that benefit from a standardized, repeatable, visible sequence. Recommended: WIRE+FRAME
Shared or Hand-Off Projects – When different teams will rely on embedded clarification, memory, and consistent task flows for recurring analyses. Recommended: WIRE+FRAME
Built-In Quality Control – You want the AI to flag top issues, self-critique, and refine, minimizing manual QC steps. Recommended: WIRE+FRAME

Prompting isn't about getting it right the first time. It's about designing the interaction and redesigning when needed. With WIRE+FRAME, you're going beyond basic prompting and designing the interaction between you and AI.

From Gut Feel To Framework: A Prompt Makeover

Let's compare the results of Kate's first AI-augmented design sprint prompt (to synthesize customer feedback into design insights) with one based on the WIRE+FRAME prompt framework, with the same data and focusing on the top results:

Original prompt: Read this customer feedback and tell me how we can improve our app for Gen Z users.
Initial ChatGPT results: Improve app reliability to reduce crashes and freezing. Provide better guidance or tutorials for financial tools like budgeting or goal setting. Enhance the transparency of Zelle transfers by showing confirmation messages. Speed up app loading and reduce lag on key actions.
With this version, you'd likely need to go back and forth with follow-up questions, rewrite the output for clarity, and add structure before sharing with your team.

WIRE+FRAME prompt above (with defined role, scope, rules, expected format, tone, flow, and evaluation loop).
Initial ChatGPT results: a structured set of themes, described below.

You can clearly see the very different results from the two prompts, both using the exact same data. While the first prompt returns a quick list of ideas, the detailed WIRE+FRAME version doesn't just summarize feedback but structures it. Themes are clearly labeled, supported by user quotes, mapped to customer journey stages, and prioritized by frequency, severity, and effort. The structured prompt results can be used as-is or shared without needing to reformat, rewrite, or explain them (see disclaimer below). The first prompt output needs massaging: it's not detailed, lacks evidence, and would require several rounds of clarification to be actionable. The first prompt may work when the stakes are low and you are exploring. But when your prompt is feeding design, product, or strategy, structure comes to the rescue.

Disclaimer: Know Your Data

A well-structured prompt can make AI output more useful, but it shouldn't be the final word, or your single source of truth.
AI models are powerful pattern predictors, not fact-checkers. If your data is unclear or poorly referenced, even the best prompt may return confident nonsense. Donât blindly trust what you see. Treat AI like a bright intern: fast, eager, and occasionally delusional. You should always be familiar with your data and validate what AI spits out. For example, in the WIRE+FRAME results above, AI rated the effort as low for financial tool onboarding. That could easily be a medium or high. Good prompting should be backed by good judgment. Try This Now Start by using the WIRE+FRAME framework to create a prompt that will help AI augment your work. You could also rewrite the last prompt you were not satisfied with, using the WIRE+FRAME, and compare the output. Feel free to use this simple tool to guide you through the framework. Methods: From Lone Prompts to a Prompt System Just as design systems have reusable components, your prompts can too. You can use the WIRE+FRAME framework to write detailed prompts, but you can also use the structure to create reusable components that are pre-tested, plug-and-play pieces you can assemble to build high-quality prompts faster. Each part of WIRE+FRAME can be transformed into a prompt component: small, reusable modules that reflect your teamâs standards, voice, and strategy. For instance, if you find yourself repeatedly using the same content for different parts of the WIRE+FRAME framework, you could save them as reusable components for you and your team. In the example below, we have two different reusable components for âW: Who & Whatâ â an insights analyst and an information architect. W: Who & What You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities. You are an experienced information architect specializing in organizing enterprise content on intranets. Your task is to reorganize the content and features into categories that reflect user goals, reduce cognitive load, and increase findability. Create and save prompt components and variations for each part of the WIRE+FRAME framework, allowing your team to quickly assemble new prompts by combining components when available, rather than starting from scratch each time. Behind The Prompts: Questions About Prompting Q: If I use a prompt framework like WIRE+FRAME every time, will the results be predictable? A: Yes and no. Yes, your outputs will be guided by a consistent set of instructions (e.g., Rules, Examples, Reference Voice / Style) that will guide the AI to give you a predictable format and style of results. And no, while the framework provides structure, it doesnât flatten the generative nature of AI, but focuses it on whatâs important to you. In the next article, we will look at how you can use this to your advantage to quickly reuse your best repeatable prompts as we build your AI assistant. Q: Could changes to AI models break the WIRE+FRAME framework? A: AI models are evolving more rapidly than any other technology weâve seen before â in fact, ChatGPT was recently updated to GPT-5 to mixed reviews. The update didnât change the core principles of prompting or the WIRE+FRAME prompt framework. With future releases, some elements of how we write prompts today may change, but the need to communicate clearly with AI wonât. Think of how you delegate work to an intern vs. 
someone with a few yearsâ experience: you still need detailed instructions the first time either is doing a task, but the level of detail may change. WIRE+FRAME isnât built only for todayâs models; the components help you clarify your intent, share relevant context, define constraints, and guide tone and format â all timeless elements, no matter how smart the model becomes. The skill of shaping clear, structured interactions with non-human AI systems will remain valuable. Q: Can prompts be more than text? What about images or sketches? A: Absolutely. With tools like GPT-5 and other multimodal models, you can upload screenshots, pictures, whiteboard sketches, or wireframes. These visuals become part of your Input Context or help define the Expected Output. The same WIRE+FRAME principles still apply: youâre setting context, tone, and format, just using images and text together. Whether your input is a paragraph or an image and text, youâre still designing the interaction. Have a prompt-related question of your own? Share it in the comments, and Iâll either respond there or explore it further in the next article in this series. From Designerly Prompting To Custom Assistants Good prompts and results donât come from using othersâ prompts, but from writing prompts that are customized for you and your context. The WIRE+FRAME framework helps with that and makes prompting a tool you can use to guide AI models like a creative partner instead of hoping for magic from a one-line request. Prompting uses the designerly skills you already use every day to collaborate with AI: Curiosity to explore what the AI can do and frame better prompts. Observation to detect bias or blind spots. Empathy to make machine outputs human. Critical thinking to verify and refine. Experiment & Iteration to learn by doing and improve the interaction over time. Growth Mindset to keep up with new technology like AI and prompting. Once you create and refine prompt components and prompts that work for you, make them reusable by documenting them. But wait, thereâs more â what if your best prompts, or the elements of your prompts, could live inside your own AI assistant, available on demand, fluent in your voice, and trained on your context? Thatâs where weâre headed next. In the next article, âDesign Your Own Design Assistantâ, weâll take what youâve learned so far and turn it into a Custom AI assistant (aka Custom GPT), a design-savvy, context-aware assistant that works like you do. Weâll walk through that exact build, from defining the assistantâs job description to uploading knowledge, testing, and sharing it with others. Resources GPT-5 Prompting Guide GPT-4.1 Prompting Guide Anthropic Prompt Engineering Prompt Engineering by Google Perplexity Webapp to guide you through the WIRE+FRAME framework
Designing For TV: The Evergreen Pattern That Shapes TV Experiences (Part 1)
TV interface design is a unique, fascinating, and often overlooked field. It's been guided by decades of evolution and innovation, yet still firmly constrained by its legacy. Follow Milan into the history, quirks, and unshakable rules that dictate how we control these devices.
Television sets have been the staple of our living rooms for decades. We watch, we interact, and we control, but how often do we design for them? TV design flew under my âradarâ for years, until one day I found myself in the deep, designing TV-specific user interfaces. Now, after gathering quite a bit of experience in the area, I would like to share my knowledge on this rather rare topic. If youâre interested in learning more about the user experience and user interfaces of television, this article should be a good starting point. Just like any other device or use case, TV has its quirks, specifics, and guiding principles. Before getting started, it will be beneficial to understand the core ins and outs. In Part 1, weâll start with a bit of history, take a close look at the fundamentals, and review the evolution of television. In Part 2, weâll dive into the depths of practical aspects of designing for TV, including its key principles and patterns. Letâs start with the two key paradigms that dictate the process of designing TV interfaces. Mind The Gap, Or The 10-foot-experience Firstly, we have the so-called â10-foot experience,â referring to the fact that interaction and consumption on TV happens from a distance of roughly three or more meters. This is significantly different than interacting with a phone or a computer and implies having some specific approaches in the TV user interface design. For example, weâll need to make text and user interface (UI) elements larger on TV to account for the bigger distance to the screen. Furthermore, weâll take extra care to adhere to contrast standards, primarily relying on dark interfaces, as light ones may be too blinding in darker surroundings. And finally, considering the laid-back nature of the device, weâll simplify the interactions. But the 10-foot experience is only one part of the equation. There wouldnât be a â10-foot experienceâ in the first place if there were no mediator between the user and the device, and if we didnât have something to interact through from a distance. There would be no 10-foot experience if there were no remote controllers. The Mediator The remote, the second half of the equation, is what allows us to interact with the TV from the comfort of the couch. Slower and more deliberate, this conglomerate of buttons lacks the fluid motion of a mouse, or the dexterity of fingers against a touchscreen â yet the capabilities of the remote should not be underestimated. Rudimentary as it is and with a limited set of functions, the remote allows for some interesting design approaches and can carry the weight of the modern TV along with its ever-growing requirements for interactivity. It underwent a handful of overhauls during the seventy years since its inception and was refined and made more ergonomic; however, there is a 40-year-old pattern so deeply ingrained in its foundation that nothing can change it. What if I told you that you could navigate TV interfaces and apps with a basic controller from the 1980s just as well as with the latest remote from Apple? Not only that, but any experience built around the six core buttons of a remote will be system-agnostic and will easily translate across platforms. This is the main point I will focus on for the rest of this article. Birth Of A Pattern As television sets were taking over peopleâs living rooms in the 1950s, manufacturers sought to upgrade and improve the user experience. 
The effort of walking up to the device to manually adjust some settings was eventually identified as an area for improvement, and as a result, the first television remote controllers were introduced to the market. Early Developments Preliminary iterations of the remotes were rather unique, and it took some divergence before we finally settled on a rectangular shape and sprinkled buttons on top. Take a look at the Zenith Flash-Matic, for example. Designed in the mid-1950s, this standout device featured a single button that triggered a directional lamp; by pointing it at specific corners of the TV set, viewers could control various functions, such as changing channels or adjusting the volume. While they were a far cry compared to their modern counterparts, devices like the Flash-Matic set the scene for further developments, and we were off to the races! As the designs evolved, the core functionality of the remote solidified. Gradually, remote controls became more than just simple channel changers, evolving into command centers for the expanding territory of home entertainment. Note: I will not go too much into history here â aside from some specific points that are of importance to the matter at hand â but if you have some time to spare, do look into the developmental history of television sets and remotes, itâs quite a fascinating topic. However, practical as they may have been, they were still considered a luxury, significantly increasing the prices of TV sets. As the 1970s were coming to a close, only around 17% of United States households had a remote controller for their TVs. Yet, things would change as the new decade rolled in. Button Mania Of The 1980s The eighties brought with them the Apple Macintosh, MTV, and Star Wars. It was a time of cultural shifts and technological innovation. Videocassette recorders (VCRs) and a multitude of other consumer electronics found their place in the living rooms of the world, along with TVs. These new devices, while enriching our media experiences, also introduced a few new design problems. Where there was once a single remote, now there were multiple remotes, and things were getting slowly out of hand. This marked the advent of universal remotes. Trying to hit many targets with one stone, the unwieldy universal remotes were humanityâs best solution for controlling a wider array of devices. And they did solve some of these problems, albeit in an awkward way. The complexity of universal remotes was a trade-off for versatility, allowing them to be programmed and used as a command center for controlling multiple devices. This meant transforming the relatively simple design of their predecessors into a beehive of buttons, prioritizing broader compatibility over elegance. On the other hand, almost as a response to the inconvenience of the universal remote, a different type of controller was conceived in the 1980s â one with a very basic layout and set of buttons, and which would leave its mark in both how we interact with the TV, and how our remotes are laid out. A device that would, knowingly or not, give birth to a navigational pattern that is yet to be broken â the NES controller. D-pad Dominance Released in 1985, the Nintendo Entertainment System (NES) was an instant hit. Having sold sixty million units around the world, it left an undeniable mark on the gaming console industry. The NES controller (which was not truly remote, as it ran a cable to the central unit) introduced the world to a deceptively simple control scheme. 
Consisting of six primary actions, it gave us the directional pad (the D-pad), along with two action buttons (A and B). Made in response to the bulky joystick, the cross-shaped cluster allowed for easy movement along two axes (up, down, left, and right). Charmingly intuitive, this navigational pattern would produce countless hours of gaming fun, but more importantly, its elementary design would âseep overâ into the wider industry â the D-pad, along with the two action buttons, would become the very basis on which future remotes would be constructed. The world continued spinning madly on, and what was once a luxury became commonplace. By the end of the decade, TV remotes were more integral to the standard television experience, and more than two-thirds of American TV owners had some sort of a remote. The nineties rolled in with further technological advancements. TV sets became more robust, allowing for finer tuning of their settings. This meant creating interfaces through which such tasks could be accomplished, and along with their master sets, remotes got updated as well. Gone were the bulky rectangular behemoths of the eighties. As ergonomics took precedence, they got replaced by comfortably contoured devices that better fit their usersâ hands. Once conglomerations of dozens of uniform buttons, these contemporary remotes introduced different shapes and sizes, allowing for recognition simply through touch. Commands were being clustered into sensible groups along the body of the remote, and within those button groups, a familiar shape started to emerge. Gradually, the D-pad found its spot on our TV remotes. As the evolution of these devices progressed, it became even more deeply embedded at the core of their interactivity. Set-top boxes and smart features emerged in the 2000s and 2010s, and TV technology continued to advance. Along the way, many bells and whistles were introduced. TVs got bigger, brighter, thinner, yet their essence remained unchanged. In the years since their inception, remotes were innovated upon, but all the undertakings circle back to the core principles of the NES controller. Future endeavours never managed to replace, but only to augment and reinforce the pattern. The Evergreen Pattern In 2013, LG introduced their Magic remote (âSo magically simple, the kids will be showing you how to use it!â). This uniquely shaped device enabled motion controls on LG TV sets, allowing users to point and click similar to a computer mouse. Having a pointer on the screen allowed for much more flexibility and speed within the system, and the remote was well-received and praised as one of the best smart TV remotes. Innovating on tradition, this device introduced new features and fresh perspectives to the world of TV. But if we look at the device itself, weâll see that, despite its differences, it still retains the D-pad as a means of interaction. It may be argued that LG never set out to replace the directional pad, and as it stands, regardless of their intent, they only managed to augment it. For an even better example, letâs examine Apple TVâs second-generation remotes (the first-generation Siri remote). Being the industry disruptors, Apple introduced a touchpad to the top half of the remote. The glass surface provided briskness and precision to the experience, enabling multi-touch gestures, swipe navigation, and quick scrolling. 
This quality of life upgrade was most noticeable when typing with the horizontal on-screen keyboards, as it allowed for smoother and quicker scrolling from A to Z, making for a more refined experience. While at first glance it may seem Apple removed the directional buttons, the fact is that the touchpad is simply a modernised take on the pattern, still abiding by the same four directions a classic D-pad does. You could say itâs a D-pad with an extra layer of gimmick. Furthermore, the touchpad didnât really sit well with the user base, along with the fact that the remoteâs ergonomics were a bit iffy. So instead of pushing the boundaries even further with their third generation of remotes, Apple did a complete 180, re-introducing the classic D-pad cluster while keeping the touch capabilities from the previous generation (the touch-enabled clickpad lets you select titles, swipe through playlists, and use a circular gesture on the outer ring to find just the scene youâre looking for). Now, why canât we figure out a better way to navigate TVs? Does that mean we shouldnât try to innovate? We can argue that using motion controls and gestures is an obvious upgrade to interacting with a TV. And weâd be right⊠in principle. These added features are more complex and costly to produce, but more importantly, while it has been upgraded with bits and bobs, the TV is essentially a legacy system. And itâs not only that. While touch controls are a staple of interaction these days, adding them without thorough consideration can reduce the usability of a remote. Pitfalls Of Touch Controls Modern car dashboards are increasingly being dominated by touchscreens. While they may impress at auto shows, their real-world usability is often compromised. Driving demands constant focus and the ability to adapt and respond to ever-changing conditions. Any interface that requires taking your eyes off the road for more than a moment increases the risk of accidents. Thatâs exactly where touch controls fall short. While they may be more practical (and likely cheaper) for manufacturers to implement, theyâre often the opposite for the end user. Unlike physical buttons, knobs, and levers, which offer tactile landmarks and feedback, touch interfaces lack the ability to be used by feeling alone. Even simple tasks like adjusting the volume of the radio or the climate controls often involve gestures and nested menus, all performed on a smooth glass surface that demands visual attention, especially when fine-tuning. Fortunately, the upcoming 2026 Euro NCAP regulations will encourage car manufacturers to reintroduce physical controls for core functions, reducing driver distraction and promoting safer interaction. Similarly (though far less critically), sleek, buttonless TV remote controls may feel modern, but they introduce unnecessary abstraction to a familiar set of controls. Physical buttons with distinct shapes and positioning allow users to navigate by memory and touch, even in the dark. Thatâs not outdated â itâs a deeper layer of usability that modern design should respect, not discard. And this is precisely why Apple reworked the Apple TV third-generation remote the way it is now, where the touch area at the top disappeared. Instead, the D-pad again had clearly defined buttons, and at the same time, the D-pad could also be extended (not replaced) to accept some touch gestures. The Legacy Of TV Letâs take a look at an old on-screen keyboard. The Legend of Zelda, released in 1986, allowed players to register their names in-game. 
There are even older games with the same feature, but thatâs beside the point. Using the NES controller, the players would move around the keyboard, entering their moniker character by character. Now letâs take a look at a modern iteration of the on-screen keyboard. Notice the difference? Or, to phrase it better: do you notice the similarities? Throughout the years, weâve introduced quality of life improvements, but the core is exactly the same as it was forty years ago. And it is not the lack of innovation or bad remotes that keep TV deeply ingrained in its beginnings. Itâs simply that itâs the most optimal way to interact given the circumstances. Laying It All Out Just like phones and computers, TV layouts are based on a grid system. However, this system is a lot more apparent and rudimentary on TV. Taking a look at a standard TV interface, weâll see that it consists mainly of horizontal and vertical lists, also known as shelves. These grids may be populated with cards, characters of the alphabet, or anything else, essentially, and upon closer examination, weâll notice that our movement is restricted by a few factors: There is no pointer for our eyes to follow, like there would be on a computer. There is no way to interact directly with the display like we would with a touchscreen. For the purposes of navigating with a remote, a focus state is introduced. This means that an element will always be highlighted for our eyes to anchor, and it will be the starting point for any subsequent movement within the interface. Simplified TV UI demonstrating a focus state along with sequential movement from item to item within a column. Moreover, starting from the focused element, we can notice that the movement is restricted to one item at a time, almost like skipping stones. Navigating linearly in such a manner, if we wanted to move within a list of elements from element #1 to element #5, weâd have to press a directional button four times. Simplified TV UI demonstrating a focus state along with sequential movement from item to item within a row. To successfully navigate such an interface, we need the ability to move left, right, up, and down â we need a D-pad. And once weâve landed on our desired item, there needs to be a way to select it or make a confirmation, and in the case of a mistake, we need to be able to go back. For the purposes of those two additional interactions, weâd need two more buttons, OK and back, or to make it more abstract, weâd need buttons A and B. So, to successfully navigate a TV interface, we need only a NES controller. Yes, we can enhance it with touchpads and motion gestures, augment it with voice controls, but this unshakeable foundation of interaction will remain as the very basic level of inherent complexity in a TV interface. Reducing it any further would significantly impair the experience, so all weâve managed to do throughout the years is to only build upon it. The D-pad and buttons A and B survived decades of innovation and technological shifts, and chances are theyâll survive many more. By understanding and respecting this principle, you can design intuitive, system-agnostic experiences and easily translate them across platforms. Knowing you canât go simpler than these six buttons, youâll easily build from the ground up and attach any additional framework-bound functionality to the time-tested core. 
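To make that six-button core concrete, here is a minimal sketch in plain JavaScript of a focus state being moved through a grid one item at a time. It assumes a flat list of elements laid out in rows of equal length, and uses the browser's arrow, Enter, and Escape keys as stand-ins for the D-pad, OK, and back buttons; real TV platforms map remote keys differently, so treat the key names and class names as illustrative assumptions:

```js
// Sketch: moving a focus state through a grid of items with a D-pad.
// Assumes elements with the class "item" laid out in rows of equal length.
const COLUMNS = 5;                       // items per shelf/row (assumption)
const items = Array.from(document.querySelectorAll(".item"));
let focusedIndex = 0;

function render() {
  // Exactly one element carries the visual focus style at any time.
  items.forEach((el, i) => el.classList.toggle("focused", i === focusedIndex));
}

function move(delta) {
  const next = focusedIndex + delta;
  if (next >= 0 && next < items.length) {
    focusedIndex = next;                 // one step at a time, like skipping stones
    render();
  }
}

document.addEventListener("keydown", (event) => {
  switch (event.key) {                   // arrow keys stand in for the D-pad here
    case "ArrowLeft":  move(-1); break;
    case "ArrowRight": move(+1); break;
    case "ArrowUp":    move(-COLUMNS); break;
    case "ArrowDown":  move(+COLUMNS); break;
    case "Enter":      items[focusedIndex].click(); break; // the OK / "A" button
    case "Escape":     history.back(); break;              // the back / "B" button
  }
});

render();
```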
And once you get the grip of these paradigms, youâll get into mapping and re-mapping buttons depending on context, and understand just how far you can go when designing for TV. Youâll be able to invent new experiences, conduct experiments, and challenge the patterns. But that is a topic for a different article. Closing Thoughts While designing for TV almost exclusively during the past few years, I was also often educating the stakeholders on the very principles outlined in this article. Trying to address their concerns about different remotes working slightly differently, I found respite in the simplicity of the NES controller and how it got the point across in an understandable way. Eventually, I expanded my knowledge by looking into the developmental history of the remote and was surprised that my analogy had backing in history. This is a fascinating niche, and thereâs a lot more to share on the topic. Iâm glad we started! Itâs vital to understand the fundamental âinsâ and âoutsâ of any venture before getting practical, and TV is no different. Now that you understand the basics, go, dig in, and break some ground. Having covered the underlying interaction patterns of TV experiences in detail, itâs time to get practical. In Part 2, weâll explore the building blocks of the 10-foot experience and how to best utilize them in your designs. Weâll review the TV design fundamentals (the screen, layout, typography, color, and focus/focus styles), and the common TV UI components (menus, âshelves,â spotlights, search, and more). I will also show you how to start thinking beyond the basics and to work with â and around â the constraints which we abide by when designing for TV. Stay tuned! Further Reading âThe 10 Foot Experience,â by Robert Stulle (Edenspiekermann) Every user interface should offer effortless navigation and control. For the 10-foot experience, this is twice as important; with only up, down, left, right, OK and back as your input vocabulary, things had better be crystal clear. You want to sit back and enjoy without having to look at your remote â your thumb should fly over the buttons to navigate, select, and activate. âIntroduction to the 10-Foot Experience for Windows Game Developersâ (Microsoft Learn) A growing number of people are using their personal computers in a completely new way. When you think of typical interaction with a Windows-based computer, you probably envision sitting at a desk with a monitor, and using a mouse and keyboard (or perhaps a joystick device); this is referred to as the 2-foot experience. But there's another trend which you'll probably start hearing more about: the 10-foot experience, which describes using your computer as an entertainment device with output to a TV. This article introduces the 10-foot experience and explores the list of things that you should consider first about this new interaction pattern, even if you aren't expecting your game to be played this way. â10-foot user interfaceâ (Wikipedia) In computing, a 10-foot user interface, or 3-meter UI, is a graphical user interface designed for televisions (TV). Compared to desktop computer and smartphone user interfaces, it uses text and other interface elements that are much larger in order to accommodate a typical television viewing distance of 10 feet (3.0 meters); in reality, this distance varies greatly between households, and additionally, the limitations of a television's remote control necessitate extra user experience considerations to minimize user effort. 
âThe Television Remote Control: A Brief History,â by Mary Bellis (ThoughtCo) The first TV remote, the Lazy Bone, was made in 1950 and used a cable. In 1955, the Flash-matic was the first wireless remote, but it had issues with sunlight. Zenith's Space Command in 1956 used ultrasound and became the popular choice for over 25 years. âThe History of The TV Remote,â by Remy Millisky (Grunge) The first person to create and patent the remote control was none other than Nikola Tesla, inventor of the Tesla coil and numerous electronic systems. He patented the idea in 1893 to drive boats remotely, far before televisions were invented. Since then, remotes have come a long way, especially for the television, changing from small boxes with long wires to the wireless universal remotes that many people have today. How has the remote evolved over time? âNintendo Entertainment System controllerâ (Nintendo Wiki) The Nintendo Entertainment System controller is the main controller for the NES. While previous systems had used joysticks, the NES controller provided a directional pad (the D-pad was introduced in the Game & Watch version of Donkey Kong). âWhy Touchscreens In Cars Donât Work,â by Jacky Li (published in June 2018) Observing the behaviour of 21 drivers has made me realize whatâs wrong with automotive UX. [...] While I was excited to learn more about the Tesla Model X, it slowly became apparent to me that the driverâs eyes were more glued to the screen than the road. Something about interacting with a touchscreen when driving made me curious to know: just how distracting are they? âEurope Is Requiring Physical Buttons For Cars To Get Top Safety Marks,â by Jason Torchinsky (published in March 2024) The overuse of touchscreens is an industry-wide problem, with almost every vehicle-maker moving key controls onto central touchscreens, obliging drivers to take their eyes off the road and raising the risk of distraction crashes. New Euro NCAP tests due in 2026 will encourage manufacturers to use separate, physical controls for basic functions in an intuitive manner, limiting eyes-off-road time and therefore promoting safer driving.
Optimizing PWAs For Different Display Modes
Progressive Web Apps (PWAs) are a great way to make apps built for the web feel native, but in moving away from a browser environment, we can introduce usability issues. This article covers how we can modify our app depending on what display mode is applied to mitigate these issues.
Progressive web apps (PWA) are a fantastic way to turn web applications into native-like, standalone experiences. They bridge the gap between websites and native apps, but this transformation can be prone to introducing design challenges that require thoughtful consideration. We define our PWAs with a manifest file. In our PWA's manifest, we can select from a collection of display modes, each offering different levels of browser interface visibility:

fullscreen: Hides all browser UI, using the entire display.
standalone: Looks like a native app, hiding browser controls but keeping system UI.
minimal-ui: Shows minimal browser UI elements.
browser: Standard web browser experience with full browser interface.

Oftentimes, we want our PWAs to feel like apps rather than a website in a browser, so we set the display manifest member to one of the options that hides the browser's interface, such as fullscreen or standalone. This is fantastic for helping make our applications feel more at home, but it can introduce some issues we wouldn't usually consider when building for the web. It's easy to forget just how much functionality the browser provides to us. Things like forward/back buttons, the ability to refresh a page, search within pages, or even manipulate, share, or copy a page's URL are all browser-provided features that users can lose access to when the browser's UI is hidden. There is also the case of things that we display on websites that don't necessarily translate to app experiences. Imagine a user deep into a form with no back button, trying to share a product page without the ability to copy a URL, or hitting a bug with no refresh button to bail them out! Much like how we make different considerations when designing for the web versus designing for print, we need to make considerations when designing for independent experiences rather than browser-based experiences by tailoring the content and user experience to the medium. Thankfully, we're provided with plenty of ways to customise the web.

Using Media Queries To Target Display Modes

We use media queries all the time when writing CSS. Whether it's switching up styles for print or setting breakpoints for responsive design, they're commonplace in the web developer's toolkit. Each of the display modes discussed previously can be used as a media query to alter the appearance of documents depending on the mode in use. Media queries such as @media (min-width: 1000px) tend to get the most use for setting breakpoints based on the viewport size, but they're capable of so much more. They can handle print styles, device orientation, contrast preferences, and a whole ton more. In our case, we're interested in the display-mode media feature. Display mode media queries correspond to the current display mode.

Note: While we may set display modes in our manifest, the actual display mode may differ depending on browser support.

These media queries directly reference the current mode:

@media (display-mode: standalone) will only apply to pages set to standalone mode.
@media (display-mode: fullscreen) applies to fullscreen mode. It is worth noting that this also applies when using the Fullscreen API.
@media (display-mode: minimal-ui) applies to minimal UI mode.
@media (display-mode: browser) applies to standard browser mode.

It is also worth keeping an eye out for the window-controls-overlay and tabbed display modes. At the time of writing, these two display modes are experimental and can be used with display_override.
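Because the mode we request in the manifest and the mode the browser actually grants can differ, it can be handy to ask the browser which mode applies right now. Here is a small sketch; the helper name and the hard-coded mode list are just for illustration, and we will come back to display_override next:

```js
// Sketch: find the display mode the browser is actually using right now.
const DISPLAY_MODES = ["fullscreen", "standalone", "minimal-ui", "browser"];

function currentDisplayMode() {
  // The first matching media query wins; "browser" acts as the fallback.
  return (
    DISPLAY_MODES.find(
      (mode) => window.matchMedia(`(display-mode: ${mode})`).matches
    ) ?? "browser"
  );
}

console.log(`Running in ${currentDisplayMode()} mode`);
```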
display_override is a member of our PWA's manifest, like display, but provides some extra options and power. display has a predetermined fallback chain (fullscreen -> standalone -> minimal-ui -> browser) that we can't change, but display_override allows setting a fallback order of our choosing, like the following:

```json
"display_override": ["fullscreen", "minimal-ui"]
```

window-controls-overlay can only apply to PWAs running on a desktop operating system. It makes the PWA take up the entire window, with window control buttons appearing as an overlay. Meanwhile, tabbed is relevant when there are multiple applications within a single window. In addition to these, there is also the picture-in-picture display mode that applies to (you guessed it) picture-in-picture modes. We use these media queries exactly as we would any other media query. To show an element with the class .pwa-only when the display mode is standalone, we could do this:

```css
.pwa-only {
  display: none;
}

@media (display-mode: standalone) {
  .pwa-only {
    display: block;
  }
}
```

If we wanted to show the element when the display mode is standalone or minimal-ui, we could do this:

```css
@media (display-mode: standalone), (display-mode: minimal-ui) {
  .pwa-only {
    display: block;
  }
}
```

As great as it is, sometimes CSS isn't enough. In those cases, we can also reference the display mode and make necessary adjustments with JavaScript:

```js
const isStandalone = window.matchMedia("(display-mode: standalone)").matches;

// Listen for display mode changes
window.matchMedia("(display-mode: standalone)").addEventListener("change", (e) => {
  if (e.matches) {
    // App is now in standalone mode
    console.log("Running as PWA");
  }
});
```

Practical Applications

Now that we know how to make display modifications depending on whether users are using our web app as a PWA or in a browser, we can have a look at how we might put these newly learnt skills to use.

Tailoring Content For PWA Users

Users who have an app installed as a PWA are already converted, so you can tweak your app to tone down the marketing speak and focus on the user experience. Since these users have demonstrated commitment by installing your app, they likely don't need promotional content or installation prompts.

Display More Options And Features

You might need to directly expose more things in PWA mode, as people won't be able to access the browser's settings as easily when the browser UI is hidden. Features like changing font sizing, switching between light and dark mode, bookmarks, sharing, tabs, etc., might need an in-app alternative.

Platform-Appropriate Features

There are features you might not want on your web app because they feel out of place, but that you might want on your PWA. A good example is the bottom navigation bar, which is common in native mobile apps thanks to the easier reachability it provides, but uncommon on websites. People sometimes print websites, but they very rarely print apps. Consider whether features like print buttons should be hidden in PWA mode.

Install Prompts

A common annoyance is a prompt to install a site as a PWA appearing when the user has already installed the site. Ideally, the browser will provide an install prompt of its own if our PWA is configured correctly, but not all browsers do, and it can be finicky. MDN has a fantastic guide on creating a custom button to trigger the installation of a PWA, but it might not fit our needs.
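If you do build your own button, a minimal sketch might pair the beforeinstallprompt event with a display-mode check so the button never shows inside the installed app. Note that beforeinstallprompt is non-standard (currently Chromium-based browsers only), and the #install-button element here is hypothetical:

```js
// Sketch: a custom install button that only appears outside the installed app.
let deferredPrompt = null;
const installButton = document.querySelector("#install-button"); // hypothetical element

window.addEventListener("beforeinstallprompt", (event) => {
  event.preventDefault();                // keep the browser's own mini-infobar quiet
  deferredPrompt = event;

  const isInstalled = window.matchMedia("(display-mode: standalone)").matches;
  if (!isInstalled && installButton) {
    installButton.hidden = false;        // only offer install in browser mode
  }
});

installButton?.addEventListener("click", async () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();               // show the browser's install dialog
  await deferredPrompt.userChoice;       // resolves to { outcome: "accepted" | "dismissed" }
  deferredPrompt = null;
  installButton.hidden = true;
});
```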
We can improve things by hiding install prompts with our media query or detecting the current display mode with JavaScript and forgoing triggering popups in the first place. We could even set this up as a reusable utility class so that anything we don't want to be displayed when the app is installed as a PWA can be hidden with ease.

```css
/* Utility class to hide elements in PWA mode */
.hide-in-pwa {
  display: block;
}

@media (display-mode: standalone), (display-mode: minimal-ui) {
  .hide-in-pwa {
    display: none !important;
  }
}
```

Then in your HTML:

```html
<div class="install-prompt hide-in-pwa">
  <button>Install Our App</button>
</div>

<div class="browser-notice hide-in-pwa">
  <p>For the best experience, install this as an app!</p>
</div>
```

We could also do the opposite and create a utility class to make elements only show when in a PWA, as we discussed earlier.

Strategic Use Of Scope And Start URL

Another way to hide content from your site is to set the scope and start_url properties. These aren't using media queries as we've discussed, but should be considered as ways to present different content depending on whether a site is installed as a PWA. Here is an example of a manifest using these properties:

```json
{
  "name": "Example PWA",
  "scope": "/dashboard/",
  "start_url": "/dashboard/index.html",
  "display": "standalone",
  "icons": [
    {
      "src": "icon.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}
```

scope here defines the top level of the PWA. When users leave the scope of your PWA, they'll still have an app-like interface but gain access to browser UI elements. This can be useful if you've got certain parts of your app that you still want to be part of the PWA but which aren't necessarily optimised or making the necessary considerations. start_url defines the URL a user will be presented with when they open the application. This is useful if, for example, your app has marketing content at example.com and a dashboard at example.com/dashboard/index.html. It is likely that people who have installed the app as a PWA don't need the marketing content, so you can set the start_url to /dashboard/index.html so the app starts on that page when they open the PWA.

Enhanced Transitions

View transitions can feel unfamiliar, out of place, and a tad gaudy on the web, but are a common feature of native applications. We can set up PWA-only view transitions by wrapping the relevant CSS appropriately:

```css
@media (display-mode: standalone) {
  @view-transition {
    navigation: auto;
  }
}
```

If you're really ambitious, you could also tweak the design of a site entirely to fit more closely with native design systems when running as a PWA by pairing a check for the display mode with a check for the device and/or browser in use as needed.

Browser Support And Testing

Browser support for display mode media queries is good and extensive. However, it's worth noting that Firefox doesn't have PWA support, and Firefox for Android only displays PWAs in browser mode, so you should make the necessary considerations. Thankfully, progressive enhancement is on our side. If we're dealing with a browser lacking support for PWAs or these media queries, we'll be treated to graceful degradation. Testing PWAs can be challenging because every device and browser handles them differently. Each display mode behaves slightly differently in every browser and OS combination. Unfortunately, I don't have a silver bullet to offer you with regard to this.
Browsers don't have a convenient way to simulate display modes for testing, so you'll have to test out your PWA on different devices, browsers, and operating systems to be sure everything works everywhere it should, as it should.

Recap

Using a PWA is a fundamentally different experience from using a web app in the browser, so considerations should be made. display-mode media queries provide a powerful way to create truly adaptive Progressive Web Apps that respond intelligently to their installation and display context. By leveraging these queries, we can do the following:

Hide redundant installation prompts for users who have already installed the app,
Provide appropriate navigation aids when browser controls are unavailable,
Tailor content and functionality to match user expectations in different contexts,
Create more native-feeling experiences that respect platform conventions, and
Progressively enhance the experience for committed users.

The key is remembering that PWA users in standalone mode have different needs and expectations than standard website visitors. By detecting and responding to display modes, we can create experiences that feel more polished, purposeful, and genuinely app-like. As PWAs continue to mature, thoughtful implementations and tailoring will become increasingly important for creating truly compelling app experiences on the web. If you're itching for even more information and PWA tips and tricks, check out Ankita Masand's "Extensive Guide To Progressive Web Applications".

Further Reading On SmashingMag

"Creating A Magento PWA: Customizing Themes vs. Coding From Scratch", Alex Husar
"How To Optimize Progressive Web Apps: Going Beyond The Basics", Gert Svaiko
"How To Decide Which PWA Elements Should Stick", Suzanne Scacca
"Uniting Web And Native Apps With 4 Unknown JavaScript APIs", Juan Diego Rodríguez
A Week In The Life Of An AI-Augmented Designer
If you are new to using AI in design or curious about integrating AI into your UX process without losing your human touch, this article offers a grounded, day-by-day look at introducing AI into your design workflow.
Artificial Intelligence isnât new, but in November 2022, something changed. The launch of ChatGPT brought AI out of the background and into everyday life. Suddenly, interacting with a machine didnât feel technical â it felt conversational. Just this March, ChatGPT overtook Instagram and TikTok as the most downloaded app in the world. That level of adoption shows that millions of everyday users, not just developers or early adopters, are comfortable using AI in casual, conversational ways. People are using AI not just to get answers, but to think, create, plan, and even to help with mental health and loneliness. In the past two and a half years, people have moved through the KĂŒbler-Ross Change Curve â only instead of grief, itâs AI-induced uncertainty. UX designers, like Kate (who youâll meet shortly), have experienced something like this: Denial: âAI canât design like a human; it wonât affect my workflow.â Anger: âAI will ruin creativity. Itâs a threat to our craft.â Bargaining: âOkay, maybe just for the boring tasks.â Depression: âI canât keep up. Whatâs the future of my skills?â Acceptance: âAlright, AI can free me up for more strategic, human work.â As designers move into experimentation, theyâre not asking, Can I use AI? but How might I use it well?. Using AI isnât about chasing the latest shiny object but about learning how to stay human in a world of machines, and use AI not as a shortcut, but as a creative collaborator. It isnât about finding, bookmarking, downloading, or hoarding prompts, but experimenting and writing your own prompts. To bring this to life, weâll follow Kate, a mid-level designer at a FinTech company, navigating her first AI-augmented design sprint. Youâll see her ups and downs as she experiments with AI, tries to balance human-centered skills with AI tools, when she relies on intuition over automation, and how she reflects critically on the role of AI at each stage of the sprint. The next two planned articles in this series will explore how to design prompts (Part 2) and guide you through building your own AI assistant (aka CustomGPT; Part 3). Along the way, weâll spotlight the designerly skills AI canât replicate like curiosity, empathy, critical thinking, and experimentation that will set you apart in a world where automation is easy, but people and human-centered design matter even more. Note: This article was written by a human (with feelings, snacks, and deadlines). The prompts are real, the AI replies are straight from the source, and no language models were overworked â just politely bossed around. All em dashes are the handiwork of MS Wordâs autocorrect â not AI. Kate is fictional, but her week is stitched together from real tools, real prompts, real design activities, and real challenges designers everywhere are navigating right now. She will primarily be using ChatGPT, reflecting the popularity of this jack-of-all-trades AI as the place many start their AI journeys before branching out. If you stick around to the end, youâll find other AI tools that may be better suited for different design sprint activities. Due to the pace of AI advances, your outputs may vary (YOMV), possibly by the time you finish reading this sentence. Cautionary Note: AI is helpful, but not always private or secure. Never share sensitive, confidential, or personal information with AI tools â even the helpful-sounding ones. When in doubt, treat it like a coworker who remembers everything and may not be particularly good at keeping secrets. 
Prologue: Meet Kate (As She Preps For The Upcoming Week) Kate stared at the digital mountain of feedback on her screen: transcripts, app reviews, survey snippets, all waiting to be synthesized. Deadlines loomed. Her calendar was a nightmare. Meanwhile, LinkedIn was ablaze with AI hot takes and success stories. Everyone seemed to have found their âAI grooveâ â except her. She wasnât anti-AI. She just hadnât figured out how it actually fit into her work. She had tried some of the prompts she saw online, played with some AI plugins and extensions, but it felt like an add-on, not a core part of her design workflow. Her team was focusing on improving financial confidence for Gen Z users of their FinTech app, and Kate planned to use one of her favorite frameworks: the Design Sprint, a five-day, high-focus process that condenses months of product thinking into a single week. Each day tackles a distinct phase: Understand, Sketch, Decide, Prototype, and Test. All designed to move fast, make ideas tangible, and learn from real users before making big bets. This time, she planned to experiment with a very lightweight version of the design sprint, almost âsolo-ishâ since her PM and engineer were available for check-ins and decisions, but not present every day. That gave her both space and a constraint, and made it the perfect opportunity to explore how AI could augment each phase of the sprint. She decided to lean on her designerly behavior of experimentation and learning and integrate AI intentionally into her sprint prep, using it as both a creative partner and a thinking aid. Not with a rigid plan, but with a working hypothesis that AI would at the very least speed her up, if nothing else. She wouldnât just be designing and testing a prototype, but prototyping and testing what it means to design with AI, while still staying in the driverâs seat. Follow Kate along her journey through her first AI-powered design sprint: from curiosity to friction and from skepticism to insight. Monday: Understanding the Problem (aka: Kate Vs. Digital Pile Of Notes) The first day of a design sprint is spent understanding the user, their problems, business priorities, and technical constraints, and narrowing down the problem to solve that week. This morning, Kate had transcripts from recent user interviews and customer feedback from the past year from app stores, surveys, and their customer support center. Typically, she would set aside a few days to process everything, coming out with glazed eyes and a few new insights. This time, she decided to use ChatGPT to summarize that data: âRead this customer feedback and tell me how we can improve financial literacy for Gen Z in our app.â ChatGPTâs outputs were underwhelming to say the least. Disappointed, she was about to give up when she remembered an infographic about good prompting that she had emailed herself. She updated her prompt based on those recommendations: Defined a role for the AI (âproduct strategistâ), Provided context (user group and design sprint objectives), and Clearly outlined what she was looking for (financial literacy related patterns in pain points, blockers, confusion, lack of confidence; synthesis to identify top opportunity areas). By the time she Aero-pressed her next cup of coffee, ChatGPT had completed its analysis, highlighting blockers like jargon, lack of control, fear of making the wrong choice, and need for blockchain wallets. Wait, what? That last one felt off. Kate searched her sources and confirmed her hunch: AI hallucination! 
Despite the best of prompts, AI sometimes makes things up based on trendy concepts from its training data rather than actual data. Kate updated her prompt with constraints to make ChatGPT only use data she had uploaded, and to cite examples from that data in its results. 18 seconds later, the updated results did not mention blockchain or other unexpected results. By lunch, Kate had the makings of a research summary that would have taken much, much longer, and a whole lot of caffeine. That afternoon, Kate and her product partner plotted the pain points on the Gen Z app journey. The emotional mapping highlighted the most critical moment: the first step of a financial decision, like setting a savings goal or choosing an investment option. That was when fear, confusion, and lack of confidence held people back. AI synthesis combined with human insight helped them define the problem statement as: âHow might we help Gen Z users confidently take their first financial action in our app, in a way that feels simple, safe, and puts them in control?â Kateâs Reflection As she wrapped up for the day, Kate jotted down her reflections on her first day as an AI-augmented designer: Thereâs nothing like learning by doing. Iâve been reading about AI and tinkering around, but took the plunge today. Turns out AI is much more than a tool, but I wouldnât call it a co-pilot. Yet. I think itâs like a sharp intern: it has a lot of information, is fast, eager to help, but it lacks context, needs supervision, and can surprise you. You have to give it clear instructions, double-check its work, and guide and supervise it. Oh, and maintain boundaries by not sharing anything I wouldnât want others to know. Today was about listening â to users, to patterns, to my own instincts. AI helped me sift through interviews fast, but I had to stay curious to catch what it missed. Some quotes felt too clean, like the edges had been smoothed over. Thatâs where observation and empathy kicked in. I had to ask myself: whatâs underneath this summary? Critical thinking was the designerly skill I had to exercise most today. It was tempting to take the AIâs synthesis at face value, but I had to push back by re-reading transcripts, questioning assumptions, and making sure I wasnât outsourcing my judgment. Turns out, the thinking part still belongs to me. Tuesday: Sketching (aka: Kate And The Sea of OKish Ideas) Day 2 of a design sprint focuses on solutions, starting by remixing and improving existing ideas, followed by people sketching potential solutions. Optimistic, yet cautious after her experience yesterday, Kate started thinking about ways she could use AI today, while brewing her first cup of coffee. By cup two, she was wondering if AI could be a creative teammate. Or a creative intern at least. She decided to ask AI for a list of relevant UX patterns across industries. Unlike yesterdayâs complex analysis, Kate was asking for inspiration, not insight, which meant she could use a simpler prompt: âGive me 10 unique examples of how top-rated apps reduce decision anxiety for first-time users â from FinTech, health, learning, or ecommerce.â She received her results in a few seconds, but there were only 6, not the 10 she asked for. She expanded her prompt for examples from a wider range of industries. While reviewing the AI examples, Kate realized that one had accessibility issues. To be fair, the results met Kateâs ask since she had not specified accessibility considerations. 
She then went pre-AI and brainstormed examples with her product partner, coming up with a few unique local examples. Later that afternoon, Kate went full human during Crazy 8s by putting a marker to paper and sketching 8 ideas in 8 minutes to rapidly explore different directions. Wondering if AI could live up to its generative nature, she uploaded pictures of her top 3 sketches and prompted AI to act as âa product design strategist experienced in Gen Z behavior, digital UX, and behavioral scienceâ, gave it context about the problem statement, stage in the design sprint, and explicitly asked AI the following: Analyze the 3 sketch concepts and identify core elements or features that resonated with the goal. Generate 5 new concept directions, each of which should: Address the original design sprint challenge. Reflect Gen Z design language, tone, and digital behaviors. Introduce a unique twist, remix, or conceptual inversion of the ideas in the sketches. For each concept, provide: Name (e.g., âMonopoly Mode,â âSmart Startâ); 1â2 sentence concept summary; Key differentiator from the original sketches; Design tone and/or behavioral psychology technique used. The results included ideas that Kate and her product partner hadnât considered, including a progress bar that started at 20% (to build confidence), and a sports-like âstock bracketâ for first-time investors. Not bad, thought Kate, as she cherry-picked elements, combined and built on these ideas in her next round of sketches. By the end of the day, they had a diverse set of sketched solutions â some original, some AI-augmented, but all exploring how to reduce fear, simplify choices, and build confidence for Gen Z users taking their first financial step. With five concept variations and a few rough storyboards, Kate was ready to start converging on day 3. Kateâs Reflection Today was creatively energizing yet a little overwhelming! I leaned hard on AI to act as a creative teammate. It delivered a few unexpected ideas and remixed my Crazy 8s into variations I never wouldâve thought of! It also reinforced the need to stay grounded in the human side of design. AI was fast â too fast, sometimes. It spit out polished-sounding ideas that sounded right, but I had to slow down, observe carefully, and ask: Does this feel right for our users? Would a first-time user feel safe or intimidated here? Critical thinking helped me separate what mattered from what didnât. Empathy pulled me back to what Gen Z users actually said, and kept their voices in mind as I sketched. Curiosity and experimentation were my fuel. I kept tweaking prompts, remixing inputs, and seeing how far I could stretch a concept before it broke. Visual communication helped translate fuzzy AI ideas into something I could react to â and more importantly, test. Wednesday: Deciding (aka Kate Tries to Get AI to Pick a Side) Design sprint teams spend Day 3 critiquing each of their potential solutions to shortlist those that have the best chance of achieving their long-term goal. The winning scenes from the sketches are then woven into a prototype storyboard. Design sprint Wednesdays were Kateâs least favorite day. After all the generative energy during Sketching Tuesday, today, she would have to decide on one clear solution to prototype and test. She was unsure if AI would be much help with judging tradeoffs or narrowing down options, and it wouldnât be able to critique like a team. Or could it? Kate reviewed each of the five concepts, noting strengths, open questions, and potential risks. 
Curious about how AI would respond, she uploaded images of three different design concepts and prompted ChatGPT for strengths and weaknesses. AIâs critique was helpful in summarizing the pros and cons of different concepts, including a few points she had not considered â like potential privacy concerns. She asked a few follow-up questions to confirm the actual reasoning. Wondering if she could simulate a team critique by prompting ChatGPT differently, Kate asked it to use the 6 thinking hats technique. The results came back dense, overwhelming, and unfocused. The AI couldnât prioritize, and it couldnât see the gaps Kate instinctively noticed: friction in onboarding, misaligned tone, unclear next steps. In that moment, the promise of AI felt overhyped. Kate stood up, stretched, and seriously considered ending her experiments with the AI-driven process. But she paused. Maybe the problem wasnât the tool. Maybe it was how she was using it. She made a note to experiment when she wasnât on a design sprint clock. She returned to her sketches, this time laying them out on the wall. No screens, no prompts. Just markers, sticky notes, and Sharpie scribbles. Human judgment took over. Kate worked with her product partner to finalize the solution to test on Friday and spent the next hour storyboarding the experience in Figma. Kate re-engaged with AI as a reviewer, not a decider. She prompted it for feedback on the storyboard and was surprised to see it spit out detailed design, content, and micro-interaction suggestions for each of the steps of the storyboarded experience. A lot of food for thought, but sheâd have to judge what mattered when she created her prototype. But that wasnât until tomorrow! Kateâs Reflection AI exposed a few of my blind spots in the critique, which was good, but it basically pointed out that multiple options âcould workâ. I had to rely on my critical thinking and instincts to weigh options logically, emotionally, and contextually in order to choose a direction that was the most testable and aligned with the user feedback from Day 1. I was also surprised by the suggestions it came up with while reviewing my final storyboard, but I will need a fresh pair of eyes and all the human judgement I can muster tomorrow. Empathy helped me walk through the flow like I was a new user. Visual communication helped pull it all together by turning abstract steps into a real storyboard for the team to see instead of imagining. TO DO: Experiment prompting around the 6 Thinking Hats for different perspectives. Thursday: Prototype (aka Kate And Faking It) On Day 4, the team usually turns the storyboard from the previous day into a prototype that can be tested with users on Day 5. The prototype doesnât need to be fully functional; a simulated experience is sufficient to gather user feedback. Kateâs prototype day often consisted of marathon Figma Design sessions and late-night pizza dinners. She was hoping AI would change that today. She fed yesterdayâs storyboard to ChatGPT and asked it for screens. It took a while to generate, but she was excited to see a screen flow gradually appear on her screen, except that it had 3 Ÿ screens, instead of the 6 frames from her storyboard, as you can see in the image below. She tried different prompts but couldnât get a complete flow. She was excited when it offered to create a Figma file. She waited. And waited. And reminded it that she was right there waiting. It finally gave her a link â a link that didnât work. 
That made her question why she needed prototype screens in Figma at all. Could one of those no-code tools do the job? Kate remembered that she had just received beta access to Figma Make (Figmaâs AI prompt-to-prototype tool) and decided to put it to the test. She shared her storyboard, added context about the prototype for a design sprint, the problem, and a brief description of the audience. In a few seconds, she saw code being generated and a running description on the left. It seemed like it would take some time, so she headed to the cafeteria for lunch. When Kate returned, she was surprised to see Figma Make had created an app along with a detailed explanation: âIâve created a complete finance app prototype with five key screens that guide users through setting financial goals in a friendly, low-pressure way. The app features smooth animations with Framer Motion, a progress indicator to show users where they are in the journey, and emotionally supportive visuals and copy throughout the experience. Each screen serves a specific purpose: welcoming users, helping them define relatable goals, building their plan with visual feedback, presenting a simple summary, and offering next steps without pressure. You might want to: Add personalization options like user profiles or avatars. Implement a dark mode toggle for accessibility. Create additional goal templates and saving strategies.â Since Figma Make had built the prototype, not her, Kate felt like she was User 0, and clicked her way through the interactive prototype, making a mental note of what worked, what didnât. She was impressed to see that she could not only select a common goal (buy a home, save for education, and so on) but she could also create her own goal (hello DeâLonghi Coffee Maker) and it carried over for the rest of the experience. This was something she had never been able to do in Figma Design! Despite some obvious misses like a missing header and navigation, and some buttons not working, she was impressed! Kate tried the option to âPublishâ and it gave her a link that she immediately shared with her product and engineering partners. A few minutes later, they joined her in the conference room, exploring it together. The engineer scanned the code, didnât seem impressed, but said it would work as a disposable prototype. Kate prompted Figma Make to add an orange header and app navigation, and this time the trio kept their eyes peeled as they saw the progress in code and in English. The results were pretty good. They spent the next hour making changes to get it ready for testing. Even though he didnât admit it, the engineer seemed impressed with the result, if not the code. By late afternoon, they had a functioning interactive prototype. Kate fed ChatGPT the prototype link and asked it to create a usability testing script. It came up with a basic, but complete test script, including a checklist for observers to take notes. Kate went through the script carefully and updated it to add probing questions about AI transparency, emotional check-ins, more specific task scenarios, and a post-test debrief that looped back to the sprint goal. Kate did a dry run with her product partner, who teased her: âDid you really need me? Couldnât your AI do it?â It hadnât occurred to her, but she was now curious! âAct as a Gen Z user seeing this interactive prototype for the first time. How would you react to the language, steps, and tone? What would make you feel more confident or in control?â It worked! 
ChatGPT simulated user feedback for the first screen and asked if she wanted it to continue. âYes, please,â she typed. A few seconds later, she was reading what could have very well been a screen-by-screen transcript from a test. Kate was still processing what she had seen as she drove home, happy she didnât have to stay late. The simulated test using AI appeared impressive at first glance. But the more she thought about it, the more disturbing it became. The output didnât mention what the simulated user clicked, and if she had asked, she probably would have received an answer. But how useful would that be? After almost missing her exit, she forced herself to think about eating a relaxed meal at home instead of her usual Prototype-Thursday-Multitasking-Pizza-Dinner. Kateâs Reflection Today was the most meta Iâve felt all week: building a prototype about AI, with AI, while being coached by AI. And it didnât all go the way I expected. While ChatGPT didnât deliver prototype screens, Figma Make coded a working, interactive prototype with interactions I couldnât have built in Figma Design. I used curiosity and experimentation today, by asking: What if I reworded this? What if I flipped that flow? AI moved fast, but I had to keep steering. But I have to admit that tweaking the prototype by changing the words, not code, felt like magic! Critical thinking isnât optional anymore â it is table stakes. My impromptu ask of ChatGPT to simulate a Gen Z user testing my flow? That part both impressed and unsettled me. Iâm going to need time to process this. But that can wait until next week. Tomorrow, I test with 5 Gen Zs â real people. Friday: Test (aka Prototype Meets User) Day 5 in a design sprint is a culmination of the weekâs work from understanding the problem, exploring solutions, choosing the best, and building a prototype. Itâs when teams interview users and learn by watching them react to the prototype and seeing if it really matters to them. As Kate prepped for the tests, she grounded herself in the sprint problem statement and the users: âHow might we help Gen Z users confidently take their first financial action in our app â in a way that feels simple, safe, and puts them in control?â She clicked through the prototype one last time â the link still worked! And just in case, she also had screenshots saved. Kate moderated the five tests while her product and engineering partners observed. The prototype may have been AI-generated, but the reactions were human. She observed where people hesitated, what made them feel safe and in control. Based on the participant, she would pivot, go off-script, and ask clarifying questions, getting deeper insights. After each session, she dropped the transcripts and their notes into ChatGPT, asking it to summarize that userâs feedback into pain points, positive signals, and any relevant quotes. At the end of the five rounds, Kate prompted them for recurring themes to use as input for their reflection and synthesis. The trio combed through the results, with an eye out for any suspicious AI-generated results. They ran into one: âUsers Trust AIâ. Not one user mentioned or clicked the âWhy this?â link, but AI possibly assumed transparency features worked because they were available in the prototype. They agreed that the prototype resonated with users, allowing all to easily set their financial goals, and identified a couple of opportunities for improvement: better explaining AI-generated plans and celebrating âwinâ moments after creating a plan. 
Both were fairly easy to address during their product build process. That was a nice end to the week: another design sprint wrapped, and Kateâs first AI-augmented design sprint! She started Monday anxious about falling behind, overwhelmed by options. She closed Friday confident in a validated concept, grounded in real user needs, and empowered by tools she now knew how to steer. Kateâs Reflection Test driving my prototype with AI yesterday left me impressed and unsettled. But todayâs tests with people reminded me why we test with real users, not proxies or people who interact with users, but actual end users. And GenAI is not the user. Five tests put my designerly skill of observation to the test. GenAI helped summarize the test transcripts quickly but snuck in one last hallucination this week â about AI! With AI, donât trust â always verify! Critical thinking is not going anywhere. AI can move fast with words, but only people can use empathy to move beyond words to truly understand human emotions. My next goal is to learn to talk to AI better, so I can get better results. Conclusion Over the course of five days, Kate explored how AI could fit into her UX work, not by reading articles or LinkedIn posts, but by doing. Through daily experiments, iterations, and missteps, she got comfortable with AI as a collaborator to support a design sprint. It accelerated every stage: synthesizing user feedback, generating divergent ideas, giving feedback, and even spinning up a working prototype, as shown below. What was clear by Friday was that speed isnât insight. While AI produced outputs fast, it was Kateâs designerly skills â curiosity, empathy, observation, visual communication, experimentation, and most importantly, critical thinking and a growth mindset â that turned data and patterns into meaningful insights. She stayed in the driverâs seat, verifying claims, adjusting prompts, and applying judgment where automation fell short. She started the week on Monday, overwhelmed, her confidence dimmed by uncertainty and the noise of AI hype. She questioned her relevance in a rapidly shifting landscape. By Friday, she not only had a validated concept but had also reshaped her entire approach to design. She had evolved: from AI-curious to AI-confident, from reactive to proactive, from unsure to empowered. Her mindset had shifted: AI was no longer a threat or trend; it was like a smart intern she could direct, critique, and collaborate with. She didnât just adapt to AI. She redefined what it meant to be a designer in the age of AI. The experience raised deeper questions: How do we make sure AI-augmented outputs are not made up? How should we treat AI-generated user feedback? Where do ethics and human responsibility intersect? Besides a validated solution to their design sprint problem, Kate had prototyped a new way of working as an AI-augmented designer. The question now isnât just âShould designers use AI?â. Itâs âHow do we work with AI responsibly, creatively, and consciously?â. Thatâs what the next article will explore: designing your interactions with AI using a repeatable framework. Poll: If you could design your own AI assistant, what would it do? Assist with ideation? Research synthesis? Identify customer pain points? Or something else entirely? Share your idea, and in the spirit of learning by doing, weâll build one together from scratch in the third article of this series: Building your own CustomGPT. 
Resources

Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days, by Jake Knapp
The Design Sprint
Figma Make
"OpenAI Appeals 'Sweeping, Unprecedented Order' Requiring It Maintain All ChatGPT Logs", Vanessa Taylor

Tools

As mentioned earlier, ChatGPT was the general-purpose LLM Kate leaned on, but you could swap it out for Claude, Gemini, Copilot, or other competitors and likely get similar results (or at least similarly weird surprises). Here are some alternate AI tools that might suit each sprint stage even better. Note that with dozens of new AI tools popping up every week, this list is far from exhaustive.

Stage / Tools / Capability
Understand: Dovetail, UserTesting's Insights Hub, Marvin (summarize and synthesize data)
Sketch: Any LLM, Musely (brainstorm concepts and ideas)
Decide: Any LLM (critique and provide feedback)
Prototype: UIzard, UXPilot, Visily, Krisspy, Figma Make, Lovable, Bolt (create wireframes and prototypes)
Test: UserTesting, UserInterviews, PlaybookUX, Maze, plus tools from the Understand stage (moderated and unmoderated user tests and synthesis)
The Double-Edged Sustainability Sword Of AI In Web Design
AI has introduced huge efficiencies for web designers and is frequently being touted as the key to unlocking sustainable design and development. But do these gains outweigh the environmental cost of using energy-hungry AI tools?
Artificial intelligence is increasingly automating large parts of design and development workflows â tasks once reserved for skilled designers and developers. This streamlining can dramatically speed up project delivery. Even back in 2023, AI-assisted developers were found to complete tasks twice as fast as those without. And AI tools have advanced massively since then. Yet this surge in capability raises a pressing dilemma: Does the environmental toll of powering AI infrastructure eclipse the efficiency gains? We can create websites faster that are optimized and more efficient to run, but the global consumption of energy by AI continues to climb. As awareness grows around the digital sectorâs hidden ecological footprint, web designers and businesses must grapple with this double-edged sword, weighing the grid-level impacts of AI against the cleaner, leaner code it can produce. The Good: How AI Can Enhance Sustainability In Web Design Thereâs no disputing that AI-driven automation has introduced higher speeds and efficiencies to many of the mundane aspects of web design. Tools that automatically generate responsive layouts, optimize image sizes, and refactor bloated scripts should free designers to focus on completing the creative side of design and development. By some interpretations, these accelerated project timelines could represent a reduction in the required energy for development, and speedier production should mean less energy used. Beyond automation, AI excels at identifying inefficiencies in code and design, as it can take a much more holistic view and assess things as a whole. Advanced algorithms can parse through stylesheets and JavaScript files to detect unused selectors or redundant logic, producing leaner, faster-loading pages. For example, AI-driven caching can increase cache hit rates by 15% by improving data availability and reducing latency. This means more user requests are served directly from the cache, reducing the need for data retrieval from the main server, which reduces energy expenditure. AI tools can utilize next-generation image formats like AVIF or WebP, as theyâre basically designed to be understood by AI and automation, and selectively compress assets based on content sensitivity. This slashes media payloads without perceptible quality loss, as the AI can use Generative Adversarial Networks (GANs) that can learn compact representations of data. AIâs impact also brings sustainability benefits via user experience (UX). AI-driven personalization engines can dynamically serve only the content a visitor needs, which eliminates superfluous scripts or images that they donât care about. This not only enhances perceived performance but reduces the number of server requests and data transferred, cutting downstream energy use in network infrastructure. With the right prompts, generative AI can be an accessibility tool and ensure sites meet inclusive design standards by checking against accessibility standards, reducing the need for redesigns that can be costly in terms of time, money, and energy. So, if you can take things in isolation, AI can and already acts as an important tool to make web design more efficient and sustainable. But do these gains outweigh the cost of the resources required in building and maintaining these tools? The Bad: The Environmental Footprint Of AI Infrastructure Yet the carbon savings engineered at the page level must be balanced against the prodigious resource demands of AI infrastructure. 
Large-scale AI hinges on data centers that already account for roughly 2% of global electricity consumption, a figure projected to swell as AI workloads grow. The International Energy Agency warns that electricity consumption from data centers could more than double by 2030 due to the increasing demand for AI tools, reaching nearly the current consumption of Japan. Training state-of-the-art language models generates carbon emissions on par with hundreds of transatlantic flights, and inference workloads, serving billions of requests daily, can rival or exceed training emissions over a modelâs lifetime. Image generation tasks represent an even steeper energy hill to climb. Producing a single AI-generated image can consume energy equivalent to charging a smartphone. As generative design and AI-based prototyping become more common in web development, the cumulative energy footprint of these operations can quickly undermine the carbon savings achieved through optimized code. Water consumption forms another hidden cost. Data centers rely heavily on evaporative cooling systems that can draw between one and five million gallons of water per day, depending on size and location, placing stress on local supplies, especially in drought-prone regions. Studies estimate a single ChatGPT query may consume up to half a liter of water when accounting for direct cooling requirements, with broader AI use potentially demanding billions of liters annually by 2027. Resource depletion and electronic waste are further concerns. High-performance components underpinning AI services, like GPUs, can have very small lifespans due to both wear and tear and being superseded by more powerful hardware. AI alone could add between 1.2 and 5 million metric tons of e-waste by 2030, due to the continuous demand for new hardware, amplifying one of the worldâs fastest-growing waste streams. Mining for the critical minerals in these devices often proceeds under unsustainable conditions due to a lack of regulations in many of the environments where rare metals can be sourced, and the resulting e-waste, rich in toxic metals like lead and mercury, poses another form of environmental damage if not properly recycled. Compounding these physical impacts is a lack of transparency in corporate reporting. Energy and water consumption figures for AI workloads are often aggregated under general data center operations, which obscures the specific toll of AI training and inference among other operations. And the energy consumption reporting of the data centres themselves has been found to have been obfuscated. Reports estimate that the emissions of data centers are up to 662% higher than initially reported due to misaligned metrics, and âcreativeâ interpretations of what constitutes an emission. This makes it hard to grasp the true scale of AIâs environmental footprint, leaving designers and decision-makers unable to make informed, environmentally conscious decisions. Do The Gains From AI Outweigh The Costs? Some industry advocates argue that AIâs energy consumption isnât as catastrophic as headlines suggest. Some groups have challenged âalarmistâ projections, claiming that AIâs current contribution of âjustâ 0.02% of global energy consumption isnât a cause for concern. Proponents also highlight AIâs supposed environmental benefits. There are claims that AI could reduce economy-wide greenhouse gas emissions by 0.1% to 1.1% through efficiency improvements. 
Google reported that five AI-powered solutions removed 26 million metric tons of emissions in 2024. The optimistic view holds that AIâs capacity to optimize everything from energy grids to transportation systems will more than compensate for its data center demands. However, recent scientific analysis reveals these arguments underestimate AIâs true impact. MIT found that data centers already consume 4.4% of all US electricity, with projections showing AI alone could use as much power as 22% of US households by 2028. Research indicates AI-specific electricity use could triple from current levels annually by 2028. Moreover, Harvard research revealed that data centers use electricity with 48% higher carbon intensity than the US average. Advice For Sustainable AI Use In Web Design Despite the environmental costs, AIâs use in business, particularly web design, isnât going away anytime soon, with 70% of large businesses looking to increase their AI investments to increase efficiencies. AIâs immense impact on productivity means those not using it are likely to be left behind. This means that environmentally conscious businesses and designers must find the right balance between AIâs environmental cost and the efficiency gains it brings. Make Sure You Have A Strong Foundation Of Sustainable Web Design Principles Before you plug in any AI magic, start by making sure the bones of your site are sustainable. Lean web fundamentals, like system fonts instead of hefty custom files, minimal JavaScript, and judicious image use, can slash a pageâs carbon footprint by stripping out redundancies that increase energy consumption. For instance, the global average web page emits about 0.8g of COâ per view, whereas sustainably crafted sites can see a roughly 70% reduction. Once that lean baseline is in place, AI-driven optimizations (image format selection, code pruning, responsive layout generation) arenât adding to bloat but building on efficiency, ensuring every joule spent on AI actually yields downstream energy savings in delivery and user experience. Choosing The Right Tools And Vendors In order to make sustainable tool choices, transparency and awareness are the first steps. Many AI vendors have pledged to work towards sustainability, but independent audits are necessary, along with clear, cohesive metrics. Standardized reporting on energy and water footprints will help us understand the true cost of AI tools, allowing for informed choices. You can look for providers that publish detailed environmental reports and hold third-party renewable energy certifications. Many major providers now offer PUE (Power Usage Effectiveness) metrics alongside renewable energy matching to demonstrate real-world commitments to clean power. When integrating AI into your build pipeline, choosing lightweight, specialized models for tasks like image compression or code linting can be more sustainable than full-scale generative engines. Task-specific tools often use considerably less energy than general AI models, as general models must process what task you want them to complete. There are a variety of guides and collectives out there that can guide you on choosing the âgreenâ web hosts that are best for your business. When choosing AI-model vendors, you should look at options that prioritize âefficiency by designâ: smaller, pruned models and edge-compute deployments can cut energy use by up to 50% compared to monolithic cloud-only models. 
Theyâre trained for specific tasks, so they donât have to expend energy computing what the task is and how to go about it. Using AI Tools Sustainably Once youâve chosen conscientious vendors, optimize how you actually use AI. You can take steps like batching non-urgent inference tasks to reduce idle GPU time, an approach shown to lower energy consumption overall compared to requesting ad-hoc, as you donât have to keep running the GPU constantly, only when you need to use it. Smarter prompts can also help make AI usage slightly more sustainable. Sam Altman of ChatGPT revealed early in 2025 that peopleâs propensity for saying âpleaseâ and âthank youâ to LLMs is costing millions of dollars and wasting energy as the Generative AI has to deal with extra phrases to compute that arenât relevant to its task. You need to ensure that your prompts are direct and to the point, and deliver the context required to complete the task to reduce the need to reprompt. Additional Strategies To Balance AIâs Environmental Cost On top of being responsible with your AI tool choice and usage, there are other steps you can take to offset the carbon cost of AI usage and enjoy the efficiency benefits it brings. Organizations can reduce their own emissions and use carbon offsetting to reduce their own carbon footprint as much as possible. Combined with the apparent sustainability benefits of AI use, this approach can help mitigate the harmful impacts of energy-hungry AI. You can ensure that youâre using green server hosting (servers run on sustainable energy) for your own site and cloud needs beyond AI, and refine your content delivery network (CDN) to ensure your sites and apps are serving compressed, optimized assets from edge locations, cutting the distance data must travel, which should reduce the associated energy use. Organizations and individuals, particularly those with thought leadership status, can be advocates pushing for transparent sustainability specifications. This involves both lobbying politicians and regulatory bodies to introduce and enforce sustainability standards and ensuring that other members of the public are kept aware of the environmental costs of AI use. Itâs only through collective action that weâre likely to see strict enforcement of both sustainable AI data centers and the standardization of emissions reporting. Regardless, it remains a tricky path to walk, along the double-edged sword of AIâs use in web design. Use AI too much, and youâre contributing to its massive carbon footprint. Use it too little, and youâre likely to be left behind by rivals that are able to work more efficiently and deliver projects much faster. The best environmentally conscious designers and organizations can currently do is attempt to navigate it as best they can and stay informed on best practices. Conclusion We canât dispute that AI use in web design delivers on its promise of agility, personalization, and resource savings at the page-level. Yet without a holistic view that accounts for the environmental demands of AI infrastructure, these gains risk being overshadowed by an expanding energy and water footprint. Achieving the balance between enjoying AIâs efficiency gains and managing its carbon footprint requires transparency, targeted deployment, human oversight, and a steadfast commitment to core sustainable web practices.
Beyond The Hype: What AI Can Really Do For Product Design
AI tools are improving fast, but it's still not clear how they fit into a real product design workflow. Nikita Samutin walks through four core stages, from analytics and ideation to prototyping and visual design, to show where AI fits and where it doesn't, illustrated with real-world examples.
These days, itâs easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless prompt libraries. Whatâs much harder to find is a clear view of how AI is actually integrated into the everyday workflow of a product designer â not for experimentation, but for real, meaningful outcomes. Iâve gone through that journey myself: testing AI across every major stage of the design process, from ideation and prototyping to visual design and user research. Along the way, Iâve built a simple, repeatable workflow that significantly boosts my productivity. In this article, Iâll share whatâs already working and break down some of the most common objections Iâve encountered â many of which Iâve faced personally. Stage 1: Idea Generation Without The ClichĂ©s Pushback: âWhenever I ask AI to suggest ideas, I just get a list of clichĂ©s. It canât produce the kind of creative thinking expected from a product designer.â Thatâs a fair point. AI doesnât know the specifics of your product, the full context of your task, or many other critical nuances. The most obvious fix is to âfeed itâ all the documentation you have. But thatâs a common mistake as it often leads to even worse results: the context gets flooded with irrelevant information, and the AIâs answers become vague and unfocused. Current-gen models can technically process thousands of words, but the longer the input, the higher the risk of missing something important, especially content buried in the middle. This is known as the âlost in the middleâ problem. To get meaningful results, AI doesnât just need more information â it needs the right information, delivered in the right way. Thatâs where the RAG approach comes in. How RAG Works Think of RAG as a smart assistant working with your personal library of documents. You upload your files, and the assistant reads each one, creating a short summary â a set of bookmarks (semantic tags) that capture the key topics, terms, scenarios, and concepts. These summaries are stored in a kind of âcard catalog,â called a vector database. When you ask a question, the assistant doesnât reread every document from cover to cover. Instead, it compares your query to the bookmarks, retrieves only the most relevant excerpts (chunks), and sends those to the language model to generate a final answer. How Is This Different from Just Dumping a Doc into the Chat? Letâs break it down: Typical chat interaction Itâs like asking your assistant to read a 100-page book from start to finish every time you have a question. Technically, all the information is âin front of them,â but itâs easy to miss something, especially if itâs in the middle. This is exactly what the âlost in the middleâ issue refers to. RAG approach You ask your smart assistant a question, and it retrieves only the relevant pages (chunks) from different documents. Itâs faster and more accurate, but it introduces a few new risks: Ambiguous question You ask, âHow can we make the project safer?â and the assistant brings you documents about cybersecurity, not finance. Mixed chunks A single chunk might contain a mix of marketing, design, and engineering notes. That blurs the meaning so the assistant canât tell what the core topic is. Semantic gap You ask, âHow can we speed up the app?â but the document says, âOptimize API response time.â For a human, thatâs obviously related. For a machine, not always. These arenât reasons to avoid RAG or AI altogether. 
Most of them can be avoided with better preparation of your knowledge base and more precise prompts. So, where do you start? Start With Three Short, Focused Documents These three short documents will give your AI assistant just enough context to be genuinely helpful: Product Overview & Scenarios A brief summary of what your product does and the core user scenarios. Target Audience Your main user segments and their key needs or goals. Research & Experiments Key insights from interviews, surveys, user testing, or product analytics. Each document should focus on a single topic and ideally stay within 300â500 words. This makes it easier to search and helps ensure that each retrieved chunk is semantically clean and highly relevant. Language Matters In practice, RAG works best when both the query and the knowledge base are in English. I ran a small experiment to test this assumption, trying a few different combinations: English prompt + English documents: Consistently accurate and relevant results. Non-English prompt + English documents: Quality dropped sharply. The AI struggled to match the query with the right content. Non-English prompt + non-English documents: The weakest performance. Even though large language models technically support multiple languages, their internal semantic maps are mostly trained in English. Vector search in other languages tends to be far less reliable. Takeaway: If you want your AI assistant to deliver precise, meaningful responses, do your RAG work entirely in English, both the data and the queries. This advice applies specifically to RAG setups. For regular chat interactions, youâre free to use other languages. A challenge also highlighted in this 2024 study on multilingual retrieval. From Outsider to Teammate: Giving AI the Context It Needs Once your AI assistant has proper context, it stops acting like an outsider and starts behaving more like someone who truly understands your product. With well-structured input, it can help you spot blind spots in your thinking, challenge assumptions, and strengthen your ideas â the way a mid-level or senior designer would. Hereâs an example of a prompt that works well for me: Your task is to perform a comparative analysis of two features: "Group gift contributions" (described in group_goals.txt) and "Personal savings goals" (described in personal_goals.txt). The goal is to identify potential conflicts in logic, architecture, and user scenarios and suggest visual and conceptual ways to clearly separate these two features in the UI so users can easily understand the difference during actual use. Please include: Possible overlaps in user goals, actions, or scenarios; Potential confusion if both features are launched at the same time; Any architectural or business-level conflicts (e.g. roles, notifications, access rights, financial logic); Suggestions for visual and conceptual separation: naming, color coding, separate sections, or other UI/UX techniques; Onboarding screens or explanatory elements that might help users understand both features. If helpful, include a comparison table with key parameters like purpose, initiator, audience, contribution method, timing, access rights, and so on. AI Needs Context, Not Just Prompts If you want AI to go beyond surface-level suggestions and become a real design partner, it needs the right context. Not just more information, but better, more structured information. Building a usable knowledge base isnât difficult. And you donât need a full-blown RAG system to get started. 
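If you do want to see what a bare-bones retrieval step looks like, here is a rough sketch in Python. It is only an illustration, not how any particular product implements RAG: TF-IDF and an in-memory index stand in for the embedding model and vector database described above, and the file names and document contents are invented for the example.

```python
# Minimal retrieval sketch: split docs into chunks, index them, pull the
# best-matching chunks for a question, then hand only those to the LLM.
# TF-IDF stands in here for a real embedding model plus vector database.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base: three short, focused documents.
documents = {
    "product_overview.txt": "The app helps users set savings goals and track progress. Core scenarios cover goal setup, deposits, and progress review.",
    "target_audience.txt": "Primary segment: Gen Z users making their first financial decisions. They value simplicity, safety, and a sense of control.",
    "research_insights.txt": "Interviews showed fear and confusion at the first step of a financial action. Users hesitate when choices feel irreversible.",
}

# 1. Chunk each document (naive sentence split; real setups chunk by topic or length).
chunks = []
for name, text in documents.items():
    for sentence in text.split(". "):
        if sentence.strip():
            chunks.append((name, sentence.strip()))

# 2. Index the chunks.
vectorizer = TfidfVectorizer()
chunk_matrix = vectorizer.fit_transform([chunk for _, chunk in chunks])

# 3. Retrieve the chunks most relevant to the query.
query = "How can we help first-time users feel confident setting a savings goal?"
scores = cosine_similarity(vectorizer.transform([query]), chunk_matrix)[0]
top = sorted(zip(scores, chunks), key=lambda pair: pair[0], reverse=True)[:3]

# 4. Only these excerpts, plus the question, would be sent to the language model.
for score, (source, chunk) in top:
    print(f"{score:.2f}  [{source}]  {chunk}")
```

The point of the sketch is the shape of the flow, not the libraries: the model never sees the whole knowledge base, only the handful of excerpts that best match the question.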
Many of these principles work even in a regular chat: well-organized content and a clear question can dramatically improve how helpful and relevant the AIâs responses are. Thatâs your first step in turning AI from a novelty into a practical tool in your product design workflow. Stage 2: Prototyping and Visual Experiments Pushback: âAI only generates obvious solutions and canât even build a proper user flow. Itâs faster to do it manually.â Thatâs a fair concern. AI still performs poorly when it comes to building complete, usable screen flows. But for individual elements, especially when exploring new interaction patterns or visual ideas, it can be surprisingly effective. For example, I needed to prototype a gamified element for a limited-time promotion. The idea is to give users a lottery ticket they can âflipâ to reveal a prize. I couldnât recreate the 3D animation I had in mind in Figma, either manually or using any available plugins. So I described the idea to Claude 4 in Figma Make and within a few minutes, without writing a single line of code, I had exactly what I needed. At the prototyping stage, AI can be a strong creative partner in two areas: UI element ideation It can generate dozens of interactive patterns, including ones you might not think of yourself. Micro-animation generation It can quickly produce polished animations that make a concept feel real, which is great for stakeholder presentations or as a handoff reference for engineers. AI can also be applied to multi-screen prototypes, but itâs not as simple as dropping in a set of mockups and getting a fully usable flow. The bigger and more complex the project, the more fine-tuning and manual fixes are required. Where AI already works brilliantly is in focused tasks â individual screens, elements, or animations â where it can kick off the thinking process and save hours of trial and error. A quick UI prototype of a gamified promo banner created with Claude 4 in Figma Make. No code or plugins needed. Hereâs another valuable way to use AI in design â as a stress-testing tool. Back in 2023, Google Research introduced PromptInfuser, an internal Figma plugin that allowed designers to attach prompts directly to UI elements and simulate semi-functional interactions within real mockups. Their goal wasnât to generate new UI, but to check how well AI could operate inside existing layouts â placing content into specific containers, handling edge-case inputs, and exposing logic gaps early. The results were striking: designers using PromptInfuser were up to 40% more effective at catching UI issues and aligning the interface with real-world input â a clear gain in design accuracy, not just speed. That closely reflects my experience with Claude 4 and Figma Make: when AI operates within a real interface structure, rather than starting from a blank canvas, it becomes a much more reliable partner. It helps test your ideas, not just generate them. Stage 3: Finalizing The Interface And Visual Style Pushback: âAI canât match our visual style. Itâs easier to just do it by hand.â This is one of the most common frustrations when using AI in design. Even if you upload your color palette, fonts, and components, the results often donât feel like they belong in your product. They tend to be either overly decorative or overly simplified. And this is a real limitation. In my experience, todayâs models still struggle to reliably apply a design system, even if you provide a component structure or JSON files with your styles. 
I tried several approaches: Direct integration with a component library. I used Figma Make (powered by Claude) and connected our library. This was the least effective method: although the AI attempted to use components, the layouts were often broken, and the visuals were overly conservative. Other designers have run into similar issues, noting that library support in Figma Make is still limited and often unstable. Uploading styles as JSON. Instead of a full component library, I tried uploading only the exported styles â colors, fonts â in a JSON format. The results improved: layouts looked more modern, but the AI still made mistakes in how styles were applied. Two-step approach: structure first, style second. What worked best was separating the process. First, I asked the AI to generate a layout and composition without any styling. Once I had a solid structure, I followed up with a request to apply the correct styles from the same JSON file. This produced the most usable result â though still far from pixel-perfect. So yes, AI still canât help you finalize your UI. It doesnât replace hand-crafted design work. But itâs very useful in other ways: Quickly creating a visual concept for discussion. Generating âwhat ifâ alternatives to existing mockups. Exploring how your interface might look in a different style or direction. Acting as a second pair of eyes by giving feedback, pointing out inconsistencies or overlooked issues you might miss when tired or too deep in the work. AI wonât save you five hours of high-fidelity design time, since youâll probably spend that long fixing its output. But as a visual sparring partner, itâs already strong. If you treat it like a source of alternatives and fresh perspectives, it becomes a valuable creative collaborator. Stage 4: Product Feedback And Analytics: AI As A Thinking Exosuit Product designers have come a long way. We used to create interfaces in Photoshop based on predefined specs. Then we delved deeper into UX with mapping user flows, conducting interviews, and understanding user behavior. Now, with AI, we gain access to yet another level: data analysis, which used to be the exclusive domain of product managers and analysts. As Vitaly Friedman rightly pointed out in one of his columns, trying to replace real UX interviews with AI can lead to false conclusions as models tend to generate an average experience, not a real one. The strength of AI isnât in inventing data but in processing it at scale. Let me give a real example. We launched an exit survey for users who were leaving our service. Within a week, we collected over 30,000 responses across seven languages. Simply counting the percentages for each of the five predefined reasons wasnât enough. I wanted to know: Are there specific times of day when users churn more? Do the reasons differ by region? Is there a correlation between user exits and system load? The real challenge was... figuring out what cuts and angles were even worth exploring. The entire technical process, from analysis to visualizations, was done âfor meâ by Gemini, working inside Google Sheets. This task took me about two hours in total. Without AI, not only would it have taken much longer, but I probably wouldnât have been able to reach that level of insight on my own at all. AI enables near real-time work with large data sets. But most importantly, it frees up your time and energy for whatâs truly valuable: asking the right questions. 
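To make the kind of cuts described above more concrete, here is a rough pandas sketch. The column names and rows are invented for illustration; in the sprint itself this work was done by Gemini inside Google Sheets, but the underlying questions (churn by time of day, reasons by region) translate directly.

```python
# Slicing a tiny, made-up exit-survey extract the way the article describes.
import pandas as pd

responses = pd.DataFrame({
    "submitted_at": ["2025-03-01 08:12", "2025-03-01 22:47", "2025-03-02 23:05",
                     "2025-03-02 09:30", "2025-03-03 21:15", "2025-03-03 10:02"],
    "region":       ["EU", "NA", "NA", "APAC", "EU", "NA"],
    "reason":       ["too expensive", "missing feature", "too expensive",
                     "hard to use", "too expensive", "missing feature"],
})
responses["submitted_at"] = pd.to_datetime(responses["submitted_at"])
responses["hour"] = responses["submitted_at"].dt.hour

# Are there specific times of day when users churn more?
print(responses.groupby("hour").size().sort_values(ascending=False))

# Do the stated reasons differ by region?
print(responses.groupby(["region", "reason"]).size().unstack(fill_value=0))
```

The hard part, as the article notes, is deciding which of these cuts is worth looking at; the mechanics themselves are quick, whether an AI assistant or a few lines of code runs them.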
A few practical notes: working with large data sets is still challenging for models without strong reasoning capabilities. In my experiments, I used Gemini embedded in Google Sheets and cross-checked the results using ChatGPT o3. Other models, including the standalone Gemini 2.5 Pro, often produced incorrect outputs or simply refused to complete the task.

AI Is Not An Autopilot But A Co-Pilot

AI in design is only as good as the questions you ask it. It doesn't do the work for you. It doesn't replace your thinking. But it helps you move faster, explore more options, validate ideas, and focus on the hard parts instead of burning time on repetitive ones. Sometimes it's still faster to design things by hand. Sometimes it makes more sense to delegate to a junior designer. But increasingly, AI is becoming the one who suggests, sharpens, and accelerates. Don't wait to build the perfect AI workflow. Start small. And that might be the first real step in turning AI from a curiosity into a trusted tool in your product design process.

Let's Summarize

If you just paste a full doc into chat, the model often misses important points, especially things buried in the middle. That's the "lost in the middle" problem.
The RAG approach helps by pulling only the most relevant pieces from your documents, so responses are faster, more accurate, and grounded in real context.
Clear, focused prompts work better. Narrow the scope, define the output, and use familiar terms to help the model stay on track.
A well-structured knowledge base makes a big difference. Organizing your content into short, topic-specific docs helps reduce noise and keep answers sharp.
Use English for both your prompts and your documents. Even multilingual models are most reliable when working in English, especially for retrieval.
Most importantly: treat AI as a creative partner. It won't replace your skills, but it can spark ideas, catch issues, and speed up the tedious parts.

Further Reading

"AI-assisted Design Workflows: How UX Teams Move Faster Without Sacrificing Quality", Cindy Brummer. This piece is a perfect prequel to my article. It explains how to start integrating AI into your design process, how to structure your workflow, and which tasks AI can reasonably take on before you dive into RAG or idea generation.
"8 essential tips for using Figma Make", Alexia Danton. While this article focuses on Figma Make, the recommendations are broadly applicable. It offers practical advice that will make your work with AI smoother, especially if you're experimenting with visual tools and structured prompting.
"What Is Retrieval-Augmented Generation aka RAG", Rick Merritt. If you want to go deeper into how RAG actually works, this is a great starting point. It breaks down key concepts like vector search and retrieval in plain terms and explains why these methods often outperform long prompts alone.
The Psychology Of Color In UX And Digital Products
Rodolpho Henrique guides you through the essential aspects of color in digital design and user experience, from the practical steps of creating effective and scalable color palettes to critical accessibility considerations.
Color plays a pivotal role in crafting compelling user experiences and successful digital products. Itâs far more than just aesthetics; color strategically guides users, establishes brand identity, and evokes specific emotions. Beyond functionality, color is also a powerful tool for brand recognition and emotional connection. Consistent use of brand colors across a digital product reinforces identity and builds trust. Different hues carry cultural and psychological associations, allowing designers to subtly influence user perception and create the desired mood. A thoughtful and deliberate approach to color in UX design elevates the user experience, strengthens brand presence, and contributes significantly to the overall success and impact of digital products. In this article, we will talk about the importance of color and why they are important for creating emotional connection and delivering consistent and accessible digital products. Well-chosen color palettes enhance usability by creating visual hierarchies, highlighting interactive elements, and providing crucial feedback on screens. For instance, a bright color might draw attention to a call-to-action button, while consistent color coding can help users navigate complex interfaces intuitively. Color also contributes significantly to accessibility, ensuring that users with visual impairments can still effectively interact with digital products. By carefully considering contrast ratios and providing alternative visual cues, designers can create inclusive experiences that cater to a wider audience. The colors we choose are the silent language of our digital products, and speaking it fluently is essential for success. Communicating Brand Identity Through Color In The Digital Space A thoughtfully curated and vibrant color palette becomes a critical differentiator, allowing a brand to stand out amidst the digital noise and cultivate stronger connections with the audience. Far beyond mere decoration, color acts as a visual shorthand, instantly conveying a brandâs personality, its underlying values, and its unique essence. According to the American Marketing Association, vibrant colors, in particular, possess an inherent magnetism, drawing the eye and etching themselves into memory within the online environment. They infuse the brand with energy and dynamism, projecting approachability and memorability in a way that more muted tones often cannot. Consistency: The Core Of Great Design Consistency is important because it fosters trust and familiarity, allowing users to quickly identify and connect with the brand in the online landscape. The strategic deployment of vibrant colors is especially crucial for brands seeking to establish themselves and flourish within the digital ecosystem. In the absence of physical storefronts or tangible in-person interactions, visual cues become paramount in shaping user perception and building brand recognition. A carefully selected primary color, supported by a complementary and equally energetic secondary palette, can become synonymous with a brandâs digital presence. A consistent application of the right colors across different digital touchpoints â from the logo and website design to the user interface of an app and engaging social media campaigns â creates a cohesive and instantly recognizable visual language. Several sources and professionals agree that the psychology behind the colors plays a significant role in shaping brand perception. 
The publication Insights Psychology, for instance, explains how colors can create emotional and behavioural responses. Vibrant colors often evoke strong emotions and associations. A bold, energetic red, for example, might communicate passion and excitement, while a bright, optimistic yellow could convey innovation and cheerfulness. By consciously aligning their color choices with their brand values and target audience preferences, digitally-native brands can create a powerful emotional resonance. Beyond Aesthetics: How Color Psychologically Impacts User Behavior In Digital As designers working with digital products, weâve learned that color is far more than a superficial layer of visual appeal. Itâs a potent, often subconscious, force that shapes how users interact with and feel about the digital products we build. Weâre not just painting pixels, weâre conducting a psychological symphony, carefully selecting each hue to evoke specific emotions, guide behavior, and ultimately forge a deeper connection with the user. The initial allure of a color palette might be purely aesthetic, but its true power lies in its ability to bypass conscious thought and tap directly into our emotional core. Think about the subtle unease that might creep in when encountering a predominantly desaturated interface for a platform promising dynamic content, or the sense of calm that washes over you when a learning application utilizes soft, analogous colors. These are not arbitrary responses; theyâre deeply rooted in our evolutionary history and cultural conditioning. To understand how colors psychologically impact user behavior in digital, we first need to understand how colors are defined. In digital design, colors are precisely defined using the HSB model, which stands for Hue, Saturation, and Brightness. This model provides a more intuitive way for designers to think about and manipulate color compared to other systems like RGB (Red, Green, Blue). Here is a quick breakdown of each component: Hue This is the pure color itself, the essence that we typically name, such as red, blue, green, or yellow. On a color wheel, hue is represented as an angle ranging from 0 to 360 degrees. For example, 0 is red, 120 is green, and 240 is blue. Think of it as the specific wavelength of light that our eyes perceive as a particular color. In UX, selecting the base hues is often tied to brand identity and the overall feeling you want to convey. Saturation Saturation refers to the intensity or purity of the hue. It describes how vivid or dull the color appears. A fully saturated color is rich and vibrant, while a color with low saturation appears muted, grayish, or desaturated. Saturation is typically expressed as a percentage, from 0% (completely desaturated, appearing as a shade of gray) to 100% (fully saturated, the purest form of the hue). In UX, saturation levels are crucial for creating visual hierarchy and drawing attention to key elements. Highly saturated colors often indicate interactive elements or important information, while lower saturation can be used for backgrounds or less critical content. Brightness Brightness, sometimes also referred to as a value or lightness, indicates how light or dark a color appears. Itâs the amount of white or black mixed into the hue. Brightness is also usually represented as a percentage, ranging from 0% (completely black, regardless of the hue or saturation) to 100% (fully bright). At 100% brightness and 0% saturation, you get white. 
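For readers who like to see the model in numbers, here is a small sketch using Python's standard library, which exposes HSB under its other common name, HSV (Brightness and Value are the same dimension). The specific colors are arbitrary examples, chosen only to show how hue, saturation, and brightness interact.

```python
# HSB (a.k.a. HSV) to RGB hex, standard library only. Hue is an angle in
# degrees; saturation and brightness are percentages, as described above.
import colorsys

def hsb_to_hex(hue_deg: float, saturation_pct: float, brightness_pct: float) -> str:
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360, saturation_pct / 100, brightness_pct / 100)
    return "#{:02X}{:02X}{:02X}".format(round(r * 255), round(g * 255), round(b * 255))

print(hsb_to_hex(210, 80, 90))   # a saturated, bright blue
print(hsb_to_hex(210, 20, 90))   # same hue, low saturation: a muted grey-blue
print(hsb_to_hex(210, 80, 30))   # same hue, low brightness: a dark navy
print(hsb_to_hex(0, 0, 100))     # 0% saturation, 100% brightness: white (#FFFFFF)
```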
In UX, adjusting brightness is essential for creating contrast and ensuring readability. Sufficient brightness contrast between text and background is a fundamental accessibility requirement. Furthermore, variations in brightness within a color palette can create visual depth and subtle distinctions between UI elements. By understanding and manipulating these 3 color dimensions, digital designers have precise control over their color choices. This allows for the creation of harmonious and effective color palettes that not only align with brand guidelines but also strategically influence user behavior. Just as in the physical world, colors in digital also carry symbolic meanings and trigger subconscious associations. Understanding these color associations is essential for UX designers aiming to craft experiences that not only look appealing but also resonate emotionally and guide user behavior effectively. As the EMB Global states, the way we perceive and interpret color is not universal, yet broad patterns of association exist. For instance, the color blue often evokes feelings of trust, stability, and calmness. This association stems from the natural world â the vastness of the sky and the tranquility of deep waters. In the digital space, this makes blue a popular choice for financial institutions, corporate platforms, and interfaces aiming to project reliability and security. However, the specific shade and context matter immensely. A bright, electric blue can feel energetic and modern, while a muted and darker blue might convey a more serious and authoritative tone. Kendra Cherry, a psychosocial and rehabilitation specialist and author of the book Everything Psychology, explains very well how colors evoke certain responses in us. For example, the color green is intrinsically linked to nature, often bringing about feelings of growth, health, freshness, and tranquility. It can also symbolize prosperity in some cultures. In digital design, green is frequently used for health and wellness applications, environmental initiatives, and platforms emphasizing sustainability. A vibrant lime green can feel energetic and youthful, while a deep forest green can evoke a sense of groundedness and organic quality. Yellow, the color of sunshine, is generally associated with optimism, happiness, energy, and warmth. Itâs attention-grabbing and can create a sense of playfulness. In digital interfaces, yellow is often used for highlighting important information, calls to action (though sparingly, as too much can be overwhelming), or for brands wanting to project a cheerful and approachable image. Red, a color with strong physiological effects, typically evokes excitement, passion, urgency, and sometimes anger or danger. It commands attention and can stimulate action. Digitally, red is often used for alerts, error messages, sales promotions, or for brands wanting to project a bold and energetic identity. Its intensity requires careful consideration, as overuse can lead to user fatigue or anxiety. Orange blends the energy of red with the optimism of yellow, often conveying enthusiasm, creativity, and friendliness. It can feel less aggressive than red but still commands attention. In digital design, orange is frequently used for calls to action, highlighting sales or special offers, and for brands aiming to appear approachable and innovative. Purple has historically been associated with royalty and luxury. It can evoke feelings of creativity, wisdom, and mystery. 
In digital contexts, purple is often used for brands aiming for a sophisticated or unique feel, particularly in areas like luxury goods, beauty, or spiritual and creative platforms. Black often signifies sophistication, power, elegance, and sometimes mystery. In digital design, black is frequently used for minimalist interfaces, luxury brands, and for creating strong contrast with lighter elements. The feeling it evokes heavily depends on the surrounding colors and overall design aesthetic. White is generally associated with purity, cleanliness, simplicity, and neutrality. It provides a sense of spaciousness and allows other colors to stand out. In digital design, white space is a crucial element, and white is often used as a primary background color to create a clean and uncluttered feel. Gray is often seen as neutral, practical, and sometimes somber or conservative. In digital interfaces, various shades of gray are essential for typography, borders, dividers, and creating visual hierarchy without being overly distracting. Evoking Emotions In Digital Interfaces Imagine an elegant furniture application. The designers might choose a primary palette of soft, desaturated blues and greens, accented with gentle earth tones. The muted blues could subtly induce a feeling of calmness and tranquility, aligning with the appâs core purpose of relaxation. The soft greens might evoke a sense of nature and well-being, further reinforcing the theme of peace and mental clarity. The earthy browns could ground the visual experience, creating a feeling of stability and connection to the natural world. Now, consider a platform for extreme investment enthusiasts. The color palette might be dominated by high-energy oranges and reds, contrasted with stark blacks and sharp whites. The vibrant oranges could evoke feelings of excitement and adventure, while the bold red might amplify the sense of adrenaline and intensity. The black and white could provide a sense of dynamism and modernity, reflecting the fast-paced nature of the activities. By consciously understanding and applying these color associations, digital designers can move beyond purely aesthetic choices and craft experiences that resonate deeply with users on an emotional level, leading to more engaging, intuitive, and successful digital products. Color As A Usability Tool Choosing the right colors isnât about adhering to fleeting trends; itâs about ensuring that our mobile applications and websites are usable by the widest possible audience, including individuals with visual impairments. Improper color choices can create significant barriers, rendering content illegible, interactive elements indistinguishable, and ultimately excluding a substantial portion of potential users. Prioritizing color with accessibility in mind is not just a matter of ethical design; itâs a fundamental aspect of creating inclusive and user-friendly digital experiences that benefit everyone. For individuals with low vision, sufficient color contrast between text and background is paramount for readability. Imagine trying to decipher light gray text on a white background â a common design trend that severely hinders those with even mild visual impairments. Adhering to Web Content Accessibility Guidelines (WCAG) contrast ratios ensures that text remains legible and understandable. Furthermore, color blindness, affecting a significant percentage of the population, necessitates the use of redundant visual cues. 
Relying solely on color to convey information, such as indicating errors in red without an accompanying text label, excludes colorblind users. By pairing color with text, icons, or patterns, we ensure that critical information is conveyed through multiple sensory channels, making it accessible to all. Thoughtful color selection, therefore, is not an optional add-on but an integral component of designing digital products that are truly usable and equitable. Choosing Your Palette As designers, we need a strategic approach to choosing color palettes, considering various factors to build a scalable and impactful color system. Hereâs a breakdown of the steps and considerations involved: 1. Deep Dive Into Brand Identity And Main Goals The journey begins with a thorough understanding of the brand itself. What are its core values? What personality does it project? Is it playful, sophisticated, innovative? Analyze existing brand guidelines (if any), target audience demographics and psychographics, and the overall goals of the digital product. The color palette should be a visual extension of this identity, reinforcing brand recognition and resonating with the intended users. For instance, a financial app aiming for trustworthiness might lean towards blues and greens, while a creative platform could explore more vibrant and unconventional hues. 2. Understand Color Psychology And Cultural Associations As discussed previously, colors carry inherent psychological and cultural baggage. While these associations are not absolute, they provide a valuable framework for initial exploration. Consider the emotions you want to evoke and research how your target audience might perceive different colors, keeping in mind cultural nuances that can significantly alter interpretations. This step is important to help in making informed decisions that align with the desired user experience and brand perception. 3. Defining The Core Colors Start by identifying the primary color â the dominant hue that represents your brandâs essence. This will likely be derived from the brand logo or existing visual identity. Next, establish a secondary color or two that complement the primary color and provide visual interest and hierarchy. These secondary colors should work harmoniously with the primary, offering flexibility for different UI elements and interactions. 4. Build A Functional Color System A consistent and scalable color palette goes beyond just a few base colors. It involves creating a system of variations for practical application within the digital interface. This typically includes tints and shades, accent colors, and neutral colors. 5. Do Not Forget About Usability And Accessibility Ensure sufficient color contrast between text and background, as well as between interactive elements and their surroundings, to meet WCAG guidelines. Tools are readily available to check color contrast ratios. Test your palette using color blindness simulators to see how it will be perceived by individuals with different types of color vision deficiencies. This helps identify potential issues where information might be lost due to color alone. Visual hierarchy is also important to guide the userâs eye and establish a clear visual story. Important elements should be visually distinct. 6. Testing And Iteration Once you have a preliminary color palette, itâs crucial to test it within the context of your digital product. Create mockups and prototypes to see how the colors work together in the actual interface. 
Gather feedback from stakeholders and, ideally, conduct user testing to identify any usability or aesthetic issues. Be prepared to iterate and refine your palette based on these insights. A well-defined color palette for the digital medium should be: Consistent, Scalable, Accessible, Brand-aligned, Emotionally resonant, and Functionally effective. By following these steps and keeping these considerations in mind, designers can craft color palettes that are not just visually appealing but also strategically powerful tools for creating effective and accessible digital experiences. Color Consistency: Building Trust And Recognition Through A Harmonized Digital Presence Consistency plays an important role in the whole color ecosystem. By maintaining a unified color scheme for interactive elements, navigation cues, and informational displays, designers create a seamless and predictable user journey, building trust through visual stability. Color consistency directly contributes to brand recognition in the increasingly crowded digital landscape. Just as a logo or typeface becomes instantly identifiable, a consistent color palette acts as a powerful visual signature. When users repeatedly encounter the same set of colors associated with a particular brand, it strengthens their recall and fosters a stronger brand association. This visual consistency extends beyond the core interface to marketing materials, social media presence, and all digital touchpoints, creating a cohesive and memorable brand experience. By strategically and consistently applying a solid and consistent color palette, digital products can cultivate stronger brand recognition, build user trust, and enhance user loyalty.
From Line To Layout: How Past Experiences Shape Your Design Career
Your past shapes who you are as a designer, no matter where your career began or how unexpected your career path may have been. Stephanie Campbell shows how those lessons can sharpen your instincts, strengthen collaboration, and help you become a better designer today.
Design career origin stories often sound clean and linear: a degree in Fine Arts, a lucky internship, or a first job that launches a linear, upward path. But what about those whose paths were not so straight? The ones who came from service, retail, construction, or even firefighting â the messy, winding paths that didnât begin right out of design school â who learned service instincts long before learning design tools? I earned my Associate in Science way later than planned, after 15 years in fine dining, which I once dismissed as a detour delaying my ârealâ career. But in hindsight, it was anything but. Those years built skills and instincts I still rely on daily â in meetings, design reviews, and messy mid-project pivots. Your Past Is Your Advantage I still have the restaurant dream. Whenever Iâm overwhelmed or deep in a deadline, it comes back: Iâm the only one running the restaurant floor. The grill is on fire. Thereâs no clean glassware. Everyone needs their check, their drink, and their table turned. I wake up sweating, and I ask myself, âWhy am I still having restaurant nightmares 15 years into a design career?â Because those jobs wired themselves into how I think and work. Those years werenât just a job but high-stakes training in adaptability, anticipation, and grace under pressure. They built muscle memory: ways of thinking, reacting, and solving problems that still appear daily in my design work. They taught me to adapt, connect with people, and move with urgency and grace. But those same instincts rooted in nightmares can trip you up if youâre unaware. Speed can override thoughtfulness. Constant anticipation can lead to over-complication. The pressure to polish can push you to over-deliver too soon. Embracing your past also means examining it â recognizing when old habits serve you and when they donât. With reflection, those experiences can become your greatest advantage. Lessons From The Line These arenât abstract comparisons. Theyâre instincts built through repetition and real-world pressure, and they show up daily in my design process. Here are five moments from restaurant life that shaped how I think, design, and collaborate today. 1. Reading The Room Reading a customerâs mood begins as soon as they sit down. Through years of trial and error, I refined my understanding of subtle cues, like seating delays indicating frustration or menus set aside, suggesting they want to enjoy cocktails. Adapting my approach based on these signals became instinctual, emerging from countless moments of observation. What I Learned The subtleties of reading a client arenât so different in product design. Contexts differ, but the cues remain similar: project specifics, facial expressions, tone of voice, lack of engagement, or even the âword saladâ of client feedback. With time, these signals become easier to spot, and you learn to ask better questions, challenge assumptions, or offer alternate approaches before misalignment grows. Whether a client is energized and all-in or hesitant and constrained, reading those cues early can make all the difference. Those instincts â like constant anticipation and early intervention â served me well in fine dining, but can hinder the design process if Iâm not in tune with how Iâm reacting. Jumping in too early can lead to over-complicating the design process, solving problems that havenât been voiced (yet), or stepping on othersâ roles. Iâve had to learn to pause, check in with the team, and trust the process to unfold more collaboratively. 
How I Apply This Today Guide direction with focused options. Early on, share 2â3 meaningful variations, like style tiles or small component explorations, to shape the conversation and avoid overwhelm. Flag misalignment fast. If something feels off, raise it early and loop in the right people. Be intentional about workshop and deliverable formats. Structure or space? Depends on what helps the client open up and share. Pause before jumping in. A sticky note on my screen (âPauseâ) helps me slow down and check assumptions. 2. Speed Vs. Intentionality In fine dining, multitasking wasnât just helpful, it was survival. Every night demanded precision timing, orchestrating every meal step, from the first drink poured to the final dessert plated. The soufflĂ©, for example, was a constant test. It takes precisely 45 minutes â no more, no less. If the guests lingered over appetizers or finished their entrĂ©es too early, that soufflĂ© risked collapse. But fine dining taught me how to handle that volatility. I learned to manage timing proactively, mastering small strategies: an amuse-bouche to buy the kitchen precious minutes, a complimentary glass of champagne to slow a too-quickly paced meal. Multitasking meant constantly adjusting in real-time, keeping a thousand tiny details aligned even when, behind the scenes, chaos loomed. What I Learned Multitasking is a given in product design, just in a different form. While the pressure is less immediate, it is more layered as designers often juggle multiple projects, overlapping timelines, differing stakeholder expectations, and evolving product needs simultaneously. That restaurant instinct to keep numerous plates spinning at the same time? Itâs how I handle shifting priorities, constant Slack pings, regular Figma updates, and unexpected client feedback â without losing sight of the big picture. The hustle and pace of fine dining hardwired me to associate speed with success. But in design, speed can sometimes undermine depth. Jumping too quickly into a solution might mean missing the real problem or polishing the wrong idea. Iâve learned that staying in motion isnât always the goal. Unlike a fast-paced service window, product design invites experimentation and course correction. Iâve had to quiet the internal timer and lean into design with a slower, more intentional nature. How I Apply This Today Make space for inspiration. Set aside time for untasked exploration outside the norm â magazines, bookstores, architecture, or gallery visits â before jumping into design. Build in pause points. Plan breaks between design rounds and schedule reviews after a weekend gap to return with fresh eyes. Stay open to starting over. Let go of work that isnât working, even full comps. Starting fresh often leads to better ideas. 3. Presentation Matters Presentation isnât just a finishing touch in fine dining â itâs everything. Itâs the mint leaf delicately placed atop a dessert, the raspberry glace cascading across the perfectly off-centered espresso cake. The presentation engages every sense: the smell of rare imported truffles on your truffle fries, or the meticulous choreography of four servers placing entrĂ©es in front of diners simultaneously, creating a collective âwowâ moment. An excellent presentation shapes dinersâ emotional connection with their meal â that experience directly impacts how generously they spend, and ultimately, your success. What I Learned A product design presentation, from the initial concept to the handoff, carries that same power. 
Introducing a new homepage design can feel mechanical or magical, depending entirely on how you frame and deliver it. Just like careful plating shapes a dinerâs experience, clear framing and confident storytelling shape how design is received. Beyond the initial introduction, explain the why behind your choices. Connect patterns to the organic elements of the brandâs identity and highlight how users will intuitively engage with each section. Presentation isnât just about aesthetics; it helps clients connect with the work, understand its value, and get excited to share it. The pressure to get everything right the first time, to present a pixel-perfect comp that âwowsâ immediately, is intense. Sometimes, an excellent presentation isnât about perfection â itâs about pacing, storytelling, and allowing the audience to see themselves in the work. Iâve had to let go of the idea that polish is everything and instead focus on the why, describing it with clarity, confidence, and connection. How I Apply This Today Frame the story first. Lead with the âwhyâ behind the work before showing the âwhatâ. It sets the tone and invites clients into the design. Keep presentations polished. Share fewer, more intentional concepts to reduce distractions and keep focus. Skip the jargon. Clients arenât designers. Use clear, relatable terms. Say âsectionâ instead of âcomponent,â or ârepeatable elementâ instead of âpattern.â Bring designs to life. Use motion, prototypes, and real content to add clarity, energy, and brand relevance. 5. Composure Under Pressure In fine dining, pressure isnât an occasional event â itâs the default setting. Every night is high stakes. Timing is tight, expectations are sky-high, and mistakes are rarely forgiven. Composure becomes your edge. You donât show panic when the kitchen is backed up or when a guest sends a dish back mid-rush. You pivot. You delegate. You anticipate. Some nights, the only thing that kept things on track was staying calm and thinking clearly. âThis notion of problem solving and decision making is key to being a great designer. I think that we need to get really strong at problem identification and then prioritization. All designers are good problem solvers, but the really great designers are strong problem finders.â â Jason Cyr, âHow being a firefighter made me a better designer thinkerâ What I Learned The same principle applies to product design. When pressure mounts â tight timelines, conflicting feedback, or unclear priorities â your ability to stay composed can shift the energy of the entire project. Composure isnât just about being calm; itâs about being adaptable and responsive without reacting impulsively. It helps you hold space for feedback, ask better questions, and move forward with clarity instead of chaos. There have also been plenty of times when a client doesnât resonate with a design, which can feel crushing. You can easily take it personally and internalize the rejection, or you can pause, listen, and course-correct. Iâve learned to focus on understanding the root of the feedback. Often, what seems like a rejection is just discomfort with a small detail, which in most cases can be easily corrected. Perfection was the baseline in restaurants, and pressure drove polish. In design, that mindset can lead to overinvesting in perfection too soon or âfreezingâ under critique. Iâve had to unlearn that success means getting everything right the first time. Now I see messy collaboration and gradual refinement as a mark of success, not failure. 
How I Apply This Today Use live design to unblock. When timelines are tight and feedback goes in circles, co-designing in real time helps break through stuck points and move forward quickly. Turn critique into clarity. Listen for whatâs underneath the feedback, then ask clarifying questions, or repeat back what youâre hearing to align before acting. Pause when stress builds. If you feel reactive, take a moment to regroup before responding. Frame changes as progress. Normalize iteration as part of the process, and not a design failure. Would I Go Back? I still dream about the restaurant floor. But now, I see it as a reminder â not of where I was stuck, but of where I perfected the instincts I use today. If youâre someone who came to design from another path, try asking yourself: When do I feel strangely at ease while others panic? What used to feel like âjust part of the job,â but now feels like a superpower? Where do I get frustrated because my instincts are different â and maybe sharper? What kinds of group dynamics feel easy to me that others struggle with? What strengths would not exist in me today if I hadnât lived that past life? Once you see the patterns, start using them. Name your edge. Talk about your background as an asset: in intros, portfolios, interviews, or team retrospectives. When projects get messy, lean into what you already know how to do. Trust your instincts. Theyâre real, and theyâre earned. But balance them, too. Stay aware of when your strengths could become blind spots, like speed overriding thoughtfulness. That kind of awareness turns experience into a tool, not a trigger. Your past doesnât need to look like anyone elseâs. It just needs to teach you something. Further Reading âIf I Was Starting My Career Today: Thoughts After 15 Years Spent In UX Designâ (Part One, Part Two), by Andrii Zhdan (Smashing Magazine) In this two-part series, Andrii Zhdan outlines common challenges faced at the start of a design career and offers advice to smooth your journey based on insights from his experience hiring designers. âOvercoming Imposter Syndrome By Developing Your Own Guiding Principles,â by Luis Ouriach (Smashing Magazine) Unfortunately, not everyone has access to a mentor or a guide at the start of the design career, which is why we often have to rely on âworking it outâ by ourselves. In this article, Luis Ouriach tries to help you in this task so that you can walk into the design critique meetings with more confidence and really deliver the best representation of your ideas. âWhy Designers Get Stuck In The Details And How To Stop,â by Nikita Samutin (Smashing Magazine) Designers love to craft, but polishing pixels before the problem is solved is a time sink. This article pinpoints the five traps that lure us into premature detail and then hands you a rescue plan to refocus on goals, ship faster, and keep your craft where it counts. âRediscovering The Joy Of Design,â by Pratik Joglekar (Smashing Magazine) Pratik Joglekar takes a philosophical approach to remind designers about the lost joy within themselves by effectively placing massive importance on mindfulness, introspection, and forward-looking. âLessons Learned As A Designer-Founder,â by Dave Feldman (Smashing Magazine) In this article, Dave Feldman shares his lessons learned and the experiments he has done as a multidisciplinary designer-founder-CEO at an early-stage startup. 
âHow Designers Should Ask For (And Receive) High-Quality Feedback,â by Andy Budd (Smashing Magazine) Designers often complain about the quality of feedback they get from senior stakeholders without realizing itâs usually because of the way they initially have framed the request. In this article, Andy Budd shares a better way of requesting feedback: rather than sharing a linear case study that explains every design revision, the first thing to do would be to better frame the problem. âHow being a Firefighter made me a better Designer Thinkerâ by Jason Cyr (Medium) The ability to come upon a situation and very quickly start evaluating information, asking questions, making decisions, and formulating a plan is a skill that every firefighter learns to develop, especially as you rise through the ranks and start leading others. âAdvice for making the most of an indirect career path to design,â by Heidi Meredith (Adobe Express Growth) I didnât know anything about design until after I graduated from the University of California, Santa Cruz, with a degree in English Literature/Creative Writing. A mere three months into it, though, I realized I didn't want to write books â I wanted to design them. I want to express my deep gratitude to Sara Wachter-Boettcher, whose coaching helped me find the clarity and confidence to write this piece â and, more importantly, to move forward with purpose in both life and work. And to Lea Alcantara, my design director at Fueled, for being a steady creative force and an inspiring example of thoughtful leadership.
Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases
Prompting isn't just about writing better instructions, but about designing better thinking. Ilia and Marina explore how advanced prompting can empower different product & design use cases, speeding up your workflow and improving results, from research and brainstorming to testing and beyond. Let's dive in.
AI is almost everywhere: it writes text, makes music, generates code, draws pictures, runs research, chats with you, and apparently even understands people better than they understand themselves?! It's a lot to take in. The pace is wild, and new tools pop up faster than anyone has time to try them. Amid the chaos, one thing is clear: this isn't hype; it's structural change. According to the Future of Jobs Report 2025 by the World Economic Forum, one of the fastest-growing, most in-demand skills for the next five years is the ability to work with AI and Big Data. That applies to almost every role, including product design. What do companies want most from their teams? Right, efficiency. And AI can make people way more efficient. We'd easily spend 3x more time on tasks like replying to our managers without AI helping out. We're learning to work with it, but many of us are still figuring out how to meet the rising bar. That's especially important for designers, whose work is all about empathy, creativity, critical thinking, and working across disciplines. It's a uniquely human mix. At least, that's what we tell ourselves. Even as debates rage about AI's limitations, tools today (June 2025; the timestamp matters in this fast-moving space) already assist with research, ideation, and testing, sometimes better than expected. Of course, not everyone agrees. AI hallucinates, loses context, and makes things up. So how can both views exist at the same time? Very simple. It's because both are true: AI is deeply flawed and surprisingly useful. The trick is knowing how to work with its strengths while managing its weaknesses. The real question isn't whether AI is good or bad; it's how we, as designers, stay sharp, stay valuable, and stay in the loop.
Why Prompting Matters
Prompting matters more than most people realize because even small tweaks in how you ask can lead to radically different outputs. To see how this works in practice, let's look at a simple example. Imagine you want to improve the onboarding experience in your product. Each prompt you might send to AI is paired below with the kind of response you get back:
Input: How to improve onboarding in a SaaS product?
Output: Broad suggestions: checklists, empty states, welcome modals...
Input: How to improve onboarding in Product A's workspace setup flow?
Output: Suggestions focused on workspace setup...
Input: How to improve onboarding in Product A's workspace setup step to address user confusion?
Output: ~10 common pain points with targeted UX fixes for each...
Input: How to improve onboarding in Product A by redesigning the workspace setup screen to reduce drop-off, with detailed reasoning?
Output: ~10 paragraphs covering a specific UI change, rationale, and expected impact...
This comparison shows just how much even the smallest prompt details can change what AI gives you. Talking to an AI model isn't that different from talking to a person. If you explain your thoughts clearly, you get better understanding and communication overall. Advanced prompting is about moving beyond one-shot, throwaway prompts. It's an iterative, structured process of refining your inputs using different techniques so you can guide the AI toward more useful results. It focuses on being intentional with every word you put in, giving the AI not just the task but also the path to approach it step by step, so it can actually do the job.
Where basic prompting throws your question at the model and hopes for a quick answer, advanced prompting helps you explore options, evaluate branches of reasoning, and converge on clear, actionable outputs. But that doesn't mean simple prompts are useless. On the contrary, short, focused prompts work well when the task is narrow, factual, or time-sensitive. They're great for idea generation, quick clarifications, or anything where deep reasoning isn't required. Think of prompting as a scale, not a binary. The simpler the task, the faster a lightweight prompt can get the job done. The more complex the task, the more structure it needs. In this article, we'll dive into how advanced prompting can empower different product & design use cases, speeding up your workflow and improving your results, whether you're researching, brainstorming, testing, or beyond. Let's dive in.
Practical Cases
In the next section, we'll explore six practical prompting techniques that we've found most useful in real product design work. These aren't abstract theories; each one is grounded in hands-on experience, tested across research, ideation, and evaluation tasks. Think of them as modular tools: you can mix, match, and adapt them depending on your use case. For each, we'll explain the thinking behind it and walk through a sample prompt. Important note: the prompts you'll see are not copy-paste recipes. Some are structured templates you can reuse with small tweaks; others are more specific, meant to spark your thinking. Use them as scaffolds, not scripts.
1. Task Decomposition By JTBD
Technique: Role, Context, Instructions template + Checkpoints (with self-reflection)
Before solving any problem, there's a critical step we often overlook: breaking the problem down into clear, actionable parts. Jumping straight into execution feels fast, but it's risky. We might end up solving the wrong thing, or solving it the wrong way. That's where GPT can help: not just by generating ideas, but by helping us think more clearly about the structure of the problem itself. There are many ways to break down a task. One of the most useful in product work is the Jobs To Be Done (JTBD) framework. Let's see how we can use advanced prompting to apply JTBD decomposition to any task. Good design starts with understanding the user, the problem, and the context. Good prompting? Pretty much the same. That's why most solid prompts include three key parts: Role, Context, and Instructions. If needed, you can also add the expected format and any constraints. In this example, we're going to break down a task into smaller jobs and add self-checkpoints to the prompt, so the AI can pause, reflect, and self-verify along the way.
Role
Act as a senior product strategist and UX designer with deep expertise in Jobs To Be Done (JTBD) methodology and user-centered design. You think in terms of user goals, progress-making moments, and unmet needs, similar to approaches used at companies like Intercom, Basecamp, or IDEO.
Context
You are helping a product team break down a broad user or business problem into a structured map of Jobs To Be Done. This decomposition will guide discovery, prioritization, and solution design.
Task & Instructions
[DESCRIBE THE USER TASK OR PROBLEM HERE]
Use JTBD thinking to uncover: the main functional job the user is trying to get done; related emotional or social jobs; sub-jobs or tasks users must complete along the way; forces of progress and barriers that influence behavior.
Checkpoints
Before finalizing, check yourself: Are the jobs clearly goal-oriented and not solution-oriented? Are sub-jobs specific steps toward the main job? Are emotional/social jobs captured? Are user struggles or unmet needs listed? If anything's missing or unclear, revise and explain what was added or changed.
With a simple one-sentence prompt, you'll likely get a high-level list of user needs or feature ideas. An advanced approach can produce a structured JTBD breakdown of a specific user problem, which may include:
Main Functional Job: A clear, goal-oriented statement describing the primary outcome the user wants to achieve.
Emotional & Social Jobs: Supporting jobs related to how the user wants to feel or be perceived during their progress.
Sub-Jobs: Step-by-step tasks or milestones the user must complete to fulfill the main job.
Forces of Progress: A breakdown of motivations (push/pull) and barriers (habits/anxieties) that influence user behavior.
But these prompts are most powerful when used with real context. Try it now with your product. Even a quick test can reveal unexpected insights.
2. Competitive UX Audit
Technique: Attachments + Reasoning Before Understanding + Tree of Thought (ToT)
Sometimes, you don't need to design something new; you need to understand what already exists. Whether you're doing a competitive analysis, learning from rivals, or benchmarking features, the first challenge is making sense of someone else's design choices. What's the feature really for? Who's it helping? Why was it built this way? Instead of rushing into critique, we can use GPT to reverse-engineer the thinking behind a product before judging it. In this case, start by grabbing the competitor's documentation for the feature you want to analyze and saving it as a PDF. Then head over to ChatGPT (or other models). Before jumping into the audit, ask it to first make sense of the documentation. This technique is called Reasoning Before Understanding (RBU). That means before you ask for critique, you ask for interpretation. This helps AI build a more accurate mental model and avoids jumping to conclusions.
Role
You are a senior UX strategist and cognitive design analyst. Your expertise lies in interpreting digital product features based on minimal initial context, inferring purpose, user intent, and mental models behind design decisions before conducting any evaluative critique.
Context
You've been given internal documentation and screenshots of a feature. The goal is not to evaluate it yet, but to understand what it's doing, for whom, and why.
Task & Instructions
Review the materials and answer: What is this feature for? Who is the intended user? What tasks or scenarios does it support? What assumptions does it make about the user? What does its structure suggest about priorities or constraints?
Once you get the first reply, take a moment to respond: clarify, correct, or add nuance to GPT's conclusions. This helps align the model's mental frame with your own. For the audit part, we'll use something called the Tree of Thought (ToT) approach. Tree of Thought (ToT) is a prompting strategy that asks the AI to "think in branches." Instead of jumping to a single answer, the model explores multiple reasoning paths, compares outcomes, and revises logic before concluding, like tracing different routes through a decision tree. This makes it perfect for handling more complex UX tasks.
You are now performing a UX audit based on your understanding of the feature.
Youâll identify potential problems, alternative design paths, and trade-offs using a Tree of Thought approach, i.e., thinking in branches, comparing different reasoning paths before concluding. or Convert your understanding of the feature into a set of Jobs-To-Be-Done statements from the userâs perspective using a Tree of Thought approach. List implicit assumptions this feature makes about the user's behavior, workflow, or context using a Tree of Thought approach. Propose alternative versions of this feature that solve the same job using different interaction or flow mechanics using a Tree of Thought approach. 3. Ideation With An Intellectual Opponent Technique: Role Conditioning + Memory Update When youâre working on creative or strategic problems, thereâs a common trap: AI often just agrees with you or tries to please your way of thinking. It treats your ideas like gospel and tells you theyâre great â even when theyâre not. So how do you avoid this? How do you get GPT to challenge your assumptions and act more like a critical thinking partner? Simple: tell it to and ask to remember. Instructions From now on, remember to follow this mode unless I explicitly say otherwise. Do not take my conclusions at face value. Your role is not to agree or assist blindly, but to serve as a sharp, respectful intellectual opponent. Every time I present an idea, do the following: Interrogate my assumptions: What am I taking for granted? Present counter-arguments: Where could I be wrong, misled, or overly confident? Test my logic: Is the reasoning sound, or are there gaps, fallacies, or biases? Offer alternatives: Not for the sake of disagreement, but to expand perspective. Prioritize truth and clarity over consensus: Even when itâs uncomfortable. Maintain a constructive, rigorous, truth-seeking tone. Donât argue for the sake of it. Argue to sharpen thought, expose blind spots, and help me reach clearer, stronger conclusions. This isnât a debate. Itâs a collaboration aimed at insight. 4. Requirements For Concepting Technique: Requirement-Oriented + Meta prompting This one deserves a whole article on its own, but letâs lay the groundwork here. When youâre building quick prototypes or UI screens using tools like v0, Bolt, Lovable, UX Pilot, etc., your prompt needs to be better than most PRDs youâve worked with. Why? Because the output depends entirely on how clearly and specifically you describe the goal. The catch? Writing that kind of prompt is hard. So instead of jumping straight to the design prompt, try writing a meta-prompt first. That is a prompt that asks GPT to help you write a better prompt. Prompting about prompting, prompt-ception, if you will. Hereâs how to make that work: Feed GPT what you already know about the app or the screen. Then ask it to treat things like information architecture, layout, and user flow as variables it can play with. That way, you donât just get one rigid idea â you get multiple concept directions to explore. Role You are a product design strategist working with AI to explore early-stage design concepts. Goal Generate 3 distinct prompt variations for designing a Daily Wellness Summary single screen in a mobile wellness tracking app for Lovable/Bolt/v0. Each variation should experiment with a different Information Architecture and Layout Strategy. You donât need to fully specify the IA or layout â just take a different angle in each prompt. 
For example, one may prioritize user state, another may prioritize habits or recommendations, and one may use a card layout while another uses a scroll feed. User context The target user is a busy professional who checks this screen once or twice a day (morning/evening) to log their mood, energy, and sleep quality, and to receive small nudges or summaries from the app. Visual style Keep the tone calm and approachable. Format Each of the 3 prompt variations should be structured clearly and independently. Remember: The key difference between the three prompts should be the underlying IA and layout logic. You donât need to over-explain â just guide the design generator toward different interpretations of the same user need. 5. From Cognitive Walkthrough To Testing Hypothesis Technique: Casual Tree of Though + Casual Reasoning + Multi-Roles + Self-Reflection Cognitive walkthrough is a powerful way to break down a user action and check whether the steps are intuitive. Example: âUser wants to add a taskâ â Do they know where to click? What to do next? Do they know it worked? Weâve found this technique super useful for reviewing our own designs. Sometimes thereâs already a mockup; other times weâre still arguing with a PM about what should go where. Either way, GPT can help. Hereâs an advanced way to run that process: Context Youâve been given a screenshot of a screen where users can create new tasks in a project management app. The main action the user wants to perform is âadd a taskâ. Simulate behavior from two user types: a beginner with no prior experience and a returning user familiar with similar tools. Task & Instructions Go through the UI step by step and evaluate: Will the user know what to do at each step? Will they understand how to perform the action? Will they know theyâve succeeded? For each step, consider alternative user paths (if multiple interpretations of the UI exist). Use a casual Tree-of-Thought method. At each step, reflect: what assumptions is the user making here? What visual feedback would help reduce uncertainty? Format Use a numbered list for each step. For each, add observations, possible confusions, and UX suggestions. Limits Donât assume prior knowledge unless itâs visually implied. Do not limit analysis to a single user type. Cognitive walkthroughs are great, but they get even more useful when they lead to testable hypotheses. After running the walkthrough, youâll usually uncover moments that might confuse users. Instead of leaving that as a guess, turn those into concrete UX testing hypotheses. We ask GPT to not only flag potential friction points, but to help define how weâd validate them with real users: using a task, a question, or observable behavior. Task & Instructions Based on your previous cognitive walkthrough: Extract all potential usability hypotheses from the walkthrough. For each hypothesis: Assess whether it can be tested through moderated or unmoderated usability testing. Explain what specific UX decision or design element may cause this issue. Use causal reasoning. For testable hypotheses: Propose a specific usability task or question. Define a clear validation criterion (how youâll know if the hypothesis is confirmed or disproved). Evaluate feasibility and signal strength of the test (e.g., how easy it is to test, and how confidently it can validate the hypothesis). Assign a priority score based on Impact, Confidence, and Ease (ICE). Limits Donât invent hypotheses not rooted in your walkthrough output. 
Only propose tests where user behavior or responses can provide meaningful validation. Skip purely technical or backend concerns.
6. Cross-Functional Feedback
Technique: Multi-Roles
Good design is co-created. And good designers are used to working with cross-functional teams: PMs, engineers, analysts, QAs, you name it. Part of the job is turning scattered feedback into clear action items. Earlier, we talked about how giving AI a "role" helps sharpen its responses. Now let's level that up: what if we give it multiple roles at once? This is called multi-role prompting. It's a great way to simulate a design review with input from different perspectives. You get quick insights and a more well-rounded critique of your design.
Role
You are a cross-functional team of experts evaluating a new dashboard design: PM (focus: user value & prioritization), Engineer (focus: feasibility & edge cases), QA tester (focus: clarity & testability), Data analyst (focus: metrics & clarity of reporting), Designer (focus: consistency & usability).
Context
The team is reviewing a mockup for a new analytics dashboard for internal use.
Task & Instructions
For each role: What stands out immediately? What concerns might this role have? What feedback or suggestions would they give?
Designing With AI Is A Skill, Not A Shortcut
By now, you've seen that prompting isn't just about typing better instructions. It's about designing better thinking. We've explored several techniques, and each is useful in different contexts:
Role + Context + Instructions + Constraints: anytime you want consistent, focused responses (especially in research, decomposition, and analysis).
Checkpoints / Self-verification: when accuracy, structure, or layered reasoning matters. Great for complex planning or JTBD breakdowns.
Reasoning Before Understanding (RBU): when input materials are large or ambiguous (like docs or screenshots). Helps reduce misinterpretation.
Tree of Thought (ToT): when you want the model to explore options, backtrack, and compare. Ideal for audits, evaluations, or divergent thinking.
Meta-prompting: when you're not sure how to even ask the right question. Use it early in fuzzy or creative concepting.
Multi-role prompting: when you need well-rounded, cross-functional critique or want to simulate team feedback.
Memory-updated "opponent" prompting: when you want to challenge your own logic, uncover blind spots, or push beyond echo chambers.
But even the best techniques won't matter if you use them blindly, so ask yourself:
Do I need precision or perspective right now? Precision? Try Role + Checkpoints for clarity and control. Perspective? Use Multi-Role or Tree of Thought to explore alternatives.
Should the model reflect my framing, or break it? Reflect it? Use Role + Context + Instructions. Break it? Try Opponent prompting to challenge assumptions.
Am I trying to reduce ambiguity, or surface complexity? Reduce ambiguity? Use Meta-prompting to clarify your ask. Surface complexity? Go with ToT or RBU to expose hidden layers.
Is this task about alignment, or exploration? Alignment? Use Multi-Roles prompting to simulate consensus. Exploration? Use Cognitive Walkthrough to push deeper.
Remember, you don't need a long prompt every time. Use detail when the task demands it, not out of habit. AI can do a lot, but it reflects the shape of your thinking. And prompting is how you shape it. So don't just prompt better. Think better. And design with AI, not around it.
The Power Of The Intl API: A Definitive Guide To Browser-Native Internationalization
Internationalization isn't just translation. It's about formatting dates, pluralizing words, sorting names, and more, all according to specific locales. Instead of relying on heavy third-party libraries, modern JavaScript offers the Intl API: a powerful, native way to handle i18n. A quiet reminder that the web truly is worldwide.
It's a common misconception that internationalization (i18n) is simply about translating text. While crucial, translation is merely one facet. One of the complexities lies in adapting information for diverse cultural expectations: How do you display a date in Japan versus Germany? What's the correct way to pluralize an item in Arabic versus English? How do you sort a list of names in various languages? Many developers have relied on weighty third-party libraries or, worse, custom-built formatting functions to tackle these challenges. These solutions, while functional, often come with significant overhead: increased bundle size, potential performance bottlenecks, and the constant struggle to keep up with evolving linguistic rules and locale data. Enter the ECMAScript Internationalization API, more commonly known as the Intl object. This silent powerhouse, built directly into modern JavaScript environments, is an often-underestimated, yet incredibly potent, native, performant, and standards-compliant solution for handling data internationalization. It's a testament to the web's commitment to being worldwide, providing a unified and efficient way to format numbers, dates, lists, and more, according to specific locales.
Intl And Locales: More Than Just Language Codes
At the heart of Intl lies the concept of a locale. A locale is far more than just a two-letter language code (like en for English or es for Spanish). It encapsulates the complete context needed to present information appropriately for a specific cultural group. This includes:
Language: The primary linguistic medium (e.g., en, es, fr).
Script: The script (e.g., Latn for Latin, Cyrl for Cyrillic). For example, zh-Hans for Simplified Chinese vs. zh-Hant for Traditional Chinese.
Region: The geographic area (e.g., US for United States, GB for Great Britain, DE for Germany). This is crucial for variations within the same language, such as en-US vs. en-GB, which differ in date, time, and number formatting.
Preferences/Variants: Further specific cultural or linguistic preferences. See "Choosing a Language Tag" from W3C for more information.
Typically, you'll want to choose the locale according to the language of the web page. This can be determined from the lang attribute:

// Get the page's language from the HTML lang attribute
const pageLocale = document.documentElement.lang || 'en-US'; // Fallback to 'en-US'

Occasionally, you may want to override the page locale with a specific locale, such as when displaying content in multiple languages:

// Force a specific locale regardless of page language
const tutorialFormatter = new Intl.NumberFormat('zh-CN', { style: 'currency', currency: 'CNY' });
console.log(`Chinese example: ${tutorialFormatter.format(199.99)}`);
// Output: ¥199.99

In some cases, you might want to use the user's preferred language:

// Use the user's preferred language
const browserLocale = navigator.language || 'ja-JP';
const formatter = new Intl.NumberFormat(browserLocale, { style: 'currency', currency: 'JPY' });

When you instantiate an Intl formatter, you can optionally pass one or more locale strings. The API will then select the most appropriate locale based on availability and preference.
Core Formatting Powerhouses
The Intl object exposes several constructors, each for a specific formatting task. Let's delve into the most frequently used ones, along with some powerful, often-overlooked gems.
1. Intl.DateTimeFormat: Dates And Times, Globally
Formatting dates and times is a classic i18n problem.
Should it be MM/DD/YYYY or DD.MM.YYYY? Should the month be a number or a full word? Intl.DateTimeFormat handles all this with ease.

const date = new Date(2025, 5, 27, 14, 30, 0); // June 27, 2025, 2:30 PM (months are zero-indexed)

// Specific locale and options (e.g., long date, short time)
const options = {
  weekday: 'long',
  year: 'numeric',
  month: 'long',
  day: 'numeric',
  hour: 'numeric',
  minute: 'numeric',
  timeZoneName: 'shortOffset' // e.g., "GMT+8"
};

console.log(new Intl.DateTimeFormat('en-US', options).format(date));
// "Friday, June 27, 2025 at 2:30 PM GMT+8"
console.log(new Intl.DateTimeFormat('de-DE', options).format(date));
// "Freitag, 27. Juni 2025 um 14:30 GMT+8"

// Using dateStyle and timeStyle for common patterns
console.log(new Intl.DateTimeFormat('en-GB', { dateStyle: 'full', timeStyle: 'short' }).format(date));
// "Friday 27 June 2025 at 14:30"
console.log(new Intl.DateTimeFormat('ja-JP', { dateStyle: 'long', timeStyle: 'short' }).format(date));
// "2025年6月27日 14:30"

The flexibility of options for DateTimeFormat is vast, allowing control over year, month, day, weekday, hour, minute, second, time zone, and more.
2. Intl.NumberFormat: Numbers With Cultural Nuance
Beyond simple decimal places, numbers require careful handling: thousands separators, decimal markers, currency symbols, and percentage signs vary wildly across locales.

const price = 123456.789;

// Currency formatting
console.log(new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' }).format(price));
// "$123,456.79" (auto-rounds)
console.log(new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' }).format(price));
// "123.456,79 €"

// Units
console.log(new Intl.NumberFormat('en-US', { style: 'unit', unit: 'meter', unitDisplay: 'long' }).format(100));
// "100 meters"
console.log(new Intl.NumberFormat('fr-FR', { style: 'unit', unit: 'kilogram', unitDisplay: 'short' }).format(5.5));
// "5,5 kg"

Options like minimumFractionDigits, maximumFractionDigits, and notation (e.g., scientific, compact) provide even finer control.
3. Intl.ListFormat: Natural Language Lists
Presenting lists of items is surprisingly tricky. English uses "and" for conjunction and "or" for disjunction. Many languages have different conjunctions, and some require specific punctuation. This API simplifies a task that would otherwise require complex conditional logic:

const items = ['apples', 'oranges', 'bananas'];

// Conjunction ("and") list
console.log(new Intl.ListFormat('en-US', { type: 'conjunction' }).format(items));
// "apples, oranges, and bananas"
console.log(new Intl.ListFormat('de-DE', { type: 'conjunction' }).format(items));
// "apples, oranges und bananas" (the items themselves are not translated)

// Disjunction ("or") list
console.log(new Intl.ListFormat('en-US', { type: 'disjunction' }).format(items));
// "apples, oranges, or bananas"
console.log(new Intl.ListFormat('fr-FR', { type: 'disjunction' }).format(items));
// "apples, oranges ou bananas"

4. Intl.RelativeTimeFormat: Human-Friendly Timestamps
Displaying "2 days ago" or "in 3 months" is common in UI, but localizing these phrases accurately requires extensive data. Intl.RelativeTimeFormat automates this.
const rtf = new Intl.RelativeTimeFormat('en-US', { numeric: 'auto' });
console.log(rtf.format(-1, 'day'));   // "yesterday"
console.log(rtf.format(1, 'day'));    // "tomorrow"
console.log(rtf.format(-7, 'day'));   // "7 days ago"
console.log(rtf.format(3, 'month'));  // "in 3 months"
console.log(rtf.format(-2, 'year'));  // "2 years ago"

// French example:
const frRtf = new Intl.RelativeTimeFormat('fr-FR', { numeric: 'auto', style: 'long' });
console.log(frRtf.format(-1, 'day'));  // "hier"
console.log(frRtf.format(1, 'day'));   // "demain"
console.log(frRtf.format(-7, 'day'));  // "il y a 7 jours"
console.log(frRtf.format(3, 'month')); // "dans 3 mois"

The numeric: 'always' option would force "1 day ago" instead of "yesterday".
5. Intl.PluralRules: Mastering Pluralization
This is arguably one of the most critical aspects of i18n. Different languages have vastly different pluralization rules (e.g., English has singular/plural, Arabic has zero, one, two, many...). Intl.PluralRules allows you to determine the "plural category" for a given number in a specific locale.

const prEn = new Intl.PluralRules('en-US');
console.log(prEn.select(0)); // "other" (for "0 items")
console.log(prEn.select(1)); // "one" (for "1 item")
console.log(prEn.select(2)); // "other" (for "2 items")

const prAr = new Intl.PluralRules('ar-EG');
console.log(prAr.select(0));   // "zero"
console.log(prAr.select(1));   // "one"
console.log(prAr.select(2));   // "two"
console.log(prAr.select(10));  // "few"
console.log(prAr.select(100)); // "other"

This API doesn't pluralize text directly, but it provides the essential classification needed to select the correct translation string from your message bundles. For example, if you have message keys like item.one and item.other, you'd use pr.select(count) to pick the right one.
6. Intl.DisplayNames: Localized Names For Everything
Need to display the name of a language, a region, or a script in the user's preferred language? Intl.DisplayNames is your comprehensive solution.

// Display language names in English
const langNamesEn = new Intl.DisplayNames(['en'], { type: 'language' });
console.log(langNamesEn.of('fr'));    // "French"
console.log(langNamesEn.of('es-MX')); // "Mexican Spanish"

// Display language names in French
const langNamesFr = new Intl.DisplayNames(['fr'], { type: 'language' });
console.log(langNamesFr.of('en'));      // "anglais"
console.log(langNamesFr.of('zh-Hans')); // "chinois (simplifié)"

// Display region names
const regionNamesEn = new Intl.DisplayNames(['en'], { type: 'region' });
console.log(regionNamesEn.of('US')); // "United States"
console.log(regionNamesEn.of('DE')); // "Germany"

// Display script names
const scriptNamesEn = new Intl.DisplayNames(['en'], { type: 'script' });
console.log(scriptNamesEn.of('Latn')); // "Latin"
console.log(scriptNamesEn.of('Arab')); // "Arabic"

With Intl.DisplayNames, you avoid hardcoding countless mappings for language names, regions, or scripts, keeping your application robust and lean.
Browser Support
You might be wondering about browser compatibility. The good news is that Intl has excellent support across modern browsers. All major browsers (Chrome, Firefox, Safari, Edge) fully support the core functionality discussed (DateTimeFormat, NumberFormat, ListFormat, RelativeTimeFormat, PluralRules, DisplayNames). You can confidently use these APIs without polyfills for the majority of your user base.
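Before wrapping up, here is a minimal sketch that ties several of these formatters together. The message catalog and the formatStatus helper are illustrative assumptions, not part of the Intl API; Intl.PluralRules is used only to classify the count so the right translation key can be picked, as described above.

// Illustrative only: a tiny message catalog and helper, not part of the Intl API.
const messages = {
  'en-US': { 'item.one': 'item', 'item.other': 'items' },
  'de-DE': { 'item.one': 'Artikel', 'item.other': 'Artikel' }
};

function formatStatus(locale, count, daysAgo) {
  const number = new Intl.NumberFormat(locale).format(count);
  // PluralRules only classifies the count; the actual word comes from our catalog.
  const category = new Intl.PluralRules(locale).select(count);
  const noun = messages[locale][`item.${category}`] || messages[locale]['item.other'];
  const updated = new Intl.RelativeTimeFormat(locale, { numeric: 'auto' }).format(-daysAgo, 'day');
  return `${number} ${noun} (${updated})`;
}

console.log(formatStatus('en-US', 1, 0));    // "1 item (today)"
console.log(formatStatus('en-US', 1234, 2)); // "1,234 items (2 days ago)"
console.log(formatStatus('de-DE', 1234, 2)); // "1.234 Artikel (vor 2 Tagen)"

Everything locale-sensitive here (digit grouping, plural category, relative phrasing) comes from the browser; only the nouns come from your own translations.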
Conclusion: Embrace The Global Web With Intl
The Intl API is a cornerstone of modern web development for a global audience. It empowers front-end developers to deliver highly localized user experiences with minimal effort, leveraging the browser's built-in, optimized capabilities. By adopting Intl, you reduce dependencies, shrink bundle sizes, and improve performance, all while ensuring your application respects and adapts to the diverse linguistic and cultural expectations of users worldwide. Stop wrestling with custom formatting logic and embrace this standards-compliant tool! It's important to remember that Intl handles the formatting of data. While incredibly powerful, it doesn't solve every aspect of internationalization. Content translation, bidirectional text (RTL/LTR), locale-specific typography, and deep cultural nuances beyond data formatting still require careful consideration. (I may write about these in the future!) However, for presenting dynamic data accurately and intuitively, Intl is the browser-native answer.
Further Reading & Resources
MDN Web Docs: Intl namespace object, Intl.DateTimeFormat, Intl.NumberFormat, Intl.ListFormat, Intl.RelativeTimeFormat, Intl.PluralRules, Intl.DisplayNames
ECMAScript Internationalization API Specification: The official ECMA-402 Standard
Choosing a Language Tag
Automating Design Systems: Tips And Resources For Getting Started
Design systems are more than style guides: theyâre made up of workflows, tokens, components, and documentation â all the stuff teams rely on to build consistent products. As projects grow, keeping everything in sync gets tricky fast. In this article, weâll look at how smart tooling, combined with automation where it makes sense, can speed things up, reduce errors, and help your team focus on design over maintenance.
A design system is more than just a set of colors and buttons. Itâs a shared language that helps designers and developers build good products together. At its core, a design system includes tokens (like colors, spacing, fonts), components (such as buttons, forms, navigation), plus the rules and documentation that tie all together across projects. If youâve ever used systems like Google Material Design or Shopify Polaris, for example, then youâve seen how design systems set clear expectations for structure and behavior, making teamwork smoother and faster. But while design systems promote consistency, keeping everything in sync is the hard part. Update a token in Figma, like a color or spacing value, and that change has to show up in the code, the documentation, and everywhere else itâs used. The same thing goes for components: when a buttonâs behavior changes, it needs to update across the whole system. Thatâs where the right tools and a bit of automation can make the difference. They help reduce repetitive work and keep the system easier to manage as it grows. In this article, weâll cover a variety of tools and techniques for syncing tokens, updating components, and keeping docs up to date, showing how automation can make all of it easier. The Building Blocks Of Automation Letâs start with the basics. Color, typography, spacing, radii, shadows, and all the tiny values that make up your visual language are known as design tokens, and theyâre meant to be the single source of truth for the UI. Youâll see them in design software like Figma, in code, in style guides, and in documentation. Smashing Magazine has covered them before in great detail. The problem is that they often go out of sync, such as when a color or component changes in design but doesnât get updated in the code. The more your team grows or changes, the more these mismatches show up; not because people arenât paying attention, but because manual syncing just doesnât scale. Thatâs why automating tokens is usually the first thing teams should consider doing when they start building a design system. That way, instead of writing the same color value in Figma and then again in a configuration file, you pull from a shared token source and let that drive both design and development. There are a few tools that are designed to help make this easier. Token Studio Token Studio is a Figma plugin that lets you manage design tokens directly in your file, export them to different formats, and sync them to code. Specify Specify lets you collect tokens from Figma and push them to different targets, including GitHub repositories, continuous integration pipelines, documentation, and more. NameDesignTokens.guide NamedDesignTokens.guide helps with naming conventions, which is honestly a common pain point, especially when youâre working with a large number of tokens. Once your tokens are set and connected, youâll spend way less time fixing inconsistencies. It also gives you a solid base to scale, whether thatâs adding themes, switching brands, or even building systems for multiple products. Thatâs also when naming really starts to count. If your tokens or components arenât clearly named, things can get confusing quickly. Note: Vitaly Friedmanâs âHow to Name Thingsâ is worth checking out if youâre working with larger systems. From there, itâs all about components. Tokens define the values, but components are what people actually use, e.g., buttons, inputs, cards, dropdowns â you name it. 
In a perfect setup, you build a component once and reuse it everywhere. But without structure, it's easy for things to "drift" out of scope. It's easy to end up with five versions of the same button, and what's in code doesn't match what's in Figma, for example. Automation doesn't replace design; rather, it connects everything to one source. The Figma component matches the one in production, the documentation updates when the component changes, and the whole team pulls from the same library instead of rebuilding its own version. This is where real collaboration happens. Here are a few tools that help make that happen:

UXPin Merge: Lets you design using real code components. What you prototype is what gets built.
Supernova: Helps you publish a design system, sync design and code sources, and keep documentation up-to-date.
Zeroheight: Turns your Figma components into a central, browsable, and documented system for your whole team.

How Does Everything Connect?

A lot of the work starts right inside your design application. Once your tokens and components are in place, tools like Supernova help you take it further by extracting design data, syncing it across platforms, and generating production-ready code. You don't need to write custom scripts or use the Figma API to get value from automation; these tools handle most of it for you. But for teams that want full control, Figma does offer an API. It lets you do things like the following:

Pull token values (like colors, spacing, typography) directly from Figma files,
Track changes to components and variants,
Read metadata (like style names, structure, or usage patterns), and
Map which components are used where in the design.

The Figma API is REST-based, so it works well with custom scripts and automations. You don't need a huge setup, just the right pieces. On the development side, teams usually use Node.js or Python to handle automation. For example: fetch styles from Figma, convert them into JSON, and push the values to a design token repo or directly into the codebase. (A minimal sketch of this flow appears a little further down.) You won't need that level of setup for most use cases, but it's helpful to know it's there if your team outgrows no-code tools.

Where do your tokens and components come from? How do updates happen? What tools keep everything connected? The workflow becomes easier to manage once that's clear, and you spend less time trying to fix changes or mismatches. When tokens, components, and documentation stay in sync, your team moves faster and spends less time fixing the same issues.

Extracting Design Data

Figma is a collaborative design tool used to create UIs: buttons, layouts, styles, components, everything that makes up the visual language of the product. It's also where all your design data lives, which includes the tokens we talked about earlier. This data is what we'll extract and eventually connect to your codebase. But first, you'll need a setup. To follow along:

Go to figma.com and create a free account.
Download the Figma desktop app if you prefer working locally, but keep an eye on system requirements if you're on an older device.

Once you're in, you'll see the Figma home screen. From here, it's time to set up your design tokens. You can either create everything from scratch or use a template from the Figma community to save time. Templates are a great option if you don't want to build everything yourself. But if you prefer full control, creating your own setup totally works too. There are other ways to get tokens as well.
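First, though, here is what the scripted route described above might look like in practice. This is a minimal sketch, not production code: it assumes Node 18+ (for the built-in fetch), a Figma personal access token and file key passed in as environment variables, and a deliberately simplified file structure. The endpoint and header come from the Figma REST API, but the traversal and token naming are purely illustrative, so adapt them to your own file.

```js
// Minimal sketch: pull a Figma file and write a tokens JSON file.
// Run as an ES module (e.g., a .mjs file) with FIGMA_TOKEN and FIGMA_FILE_KEY set.
import { writeFile } from 'node:fs/promises';

const res = await fetch(`https://api.figma.com/v1/files/${process.env.FIGMA_FILE_KEY}`, {
  headers: { 'X-Figma-Token': process.env.FIGMA_TOKEN },
});
if (!res.ok) throw new Error(`Figma API error: ${res.status}`);
const file = await res.json();

// Collect solid fills from top-level frames as color tokens (illustrative only;
// real documents are richer, so adapt the extraction to your own structure).
const tokens = {};
for (const page of file.document.children ?? []) {
  for (const node of page.children ?? []) {
    const fill = node.fills?.find((f) => f.type === 'SOLID');
    if (!fill) continue;
    const toHex = (v) => Math.round(v * 255).toString(16).padStart(2, '0');
    tokens[node.name] = `#${toHex(fill.color.r)}${toHex(fill.color.g)}${toHex(fill.color.b)}`;
  }
}

// Push the values into a file that a token repo or the codebase can consume.
await writeFile('tokens.json', JSON.stringify(tokens, null, 2));
console.log(`Wrote ${Object.keys(tokens).length} color tokens to tokens.json`);
```

In practice, you would run a script like this in CI whenever the Figma file changes, and commit or publish the resulting file.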
A site like namedesigntokens.guide, for example, lets you generate and download tokens in formats like JSON. The only catch is that Figma doesn't let you import JSON directly, so if you go that route, you'll need to bring in a middle tool like Specify to bridge that gap. It helps sync tokens between Figma, GitHub, and other places. For this article, though, we'll keep it simple and stick with Figma.

Pick any design system template from the Figma community to get started; there are plenty to choose from. Depending on the template you choose, you'll get a pre-defined set of tokens that includes colors, typography, spacing, components, and more. These templates come in all types: website, e-commerce, portfolio, app UI kits, you name it. For this article, we'll be using the Design System Template (Community) because it includes most of the tokens you'll need right out of the box. But feel free to pick a different one if you want to try something else.

Once you've picked your template, it's time to download the tokens. We'll use Supernova, a tool that connects directly to your Figma file and pulls out design tokens, styles, and components. It makes the design-to-code process a lot smoother.

Step 1: Sign Up On Supernova

Go to supernova.io and create an account. Once you're in, you'll land on the Supernova dashboard.

Step 2: Connect Your Figma File

To pull in the tokens, head over to the Data Sources section in Supernova and choose Figma from the list of available sources. (You'll also see other options like Storybook or Figma variables, but we're focusing on Figma.) Next, click on Connect a new file, paste the link to your Figma template, and click Import. Supernova will load the full design system from your template. From your dashboard, you'll now be able to see all the tokens.

Turning Tokens Into Code

Design tokens are great inside Figma, but the real value shows when you turn them into code. That's how the developers on your team actually get to use them. Here's the problem: many teams default to copying values manually for things like color, spacing, and typography. But when you make a change to them in Figma, the code is instantly out of sync. That's why automating this process is such a big win. Instead of rewriting the same theme setup for every project, you generate it automatically, translating designs into dev-ready assets and keeping everything in sync from one source of truth.

Now that we've got all our tokens in Supernova, let's turn them into code. First, go to the Code Automation tab, then click New Pipeline. You'll see different options depending on what you want to generate: React Native, CSS-in-JS, Flutter, Godot, and a few others. Let's go with the CSS-in-JS option for the sake of demonstration. After that, you'll land on a setup screen with three sections: Data, Configuration, and Delivery.

Data: Here, you can pick a theme. At first, it might only give you "Black" as the option; you can select that or leave it empty. It really doesn't matter for the time being.
Configuration: This is where you control how the code is structured. I picked PascalCase for how token names are formatted. You can also update how things like spacing, colors, or font styles are grouped and saved.
Delivery: This is where you choose how you want the output delivered. I chose "Build Only", which builds the code for you to download.

Once you're done, click Save. The pipeline is created, and you'll see it listed in your dashboard. From here, you can download your token code, which is already generated.
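What that download contains depends on the options you picked. As a rough, hypothetical illustration of a CSS-in-JS target with PascalCase names, the generated module might look something like the sketch below; the names and values are made up for this example, not Supernova's actual output.

```js
// Hypothetical generated output; real token names and values come from your Figma file.
export const Tokens = {
  ColorPrimary: '#3b82f6',
  ColorSurface: '#ffffff',
  SpacingSmall: '8px',
  SpacingMedium: '16px',
  FontFamilyBase: "'Inter', sans-serif",
  RadiusDefault: '4px',
};
```

Importing a module like this in your components keeps the values tied to the design source instead of scattering hardcoded copies around the codebase.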
Automating Documentation So, whatâs the point of documentation in a design system? You can think of it as the instruction manual for your team. It explains what each token or component is, why it exists, and how to use it. Designers, developers, and anyone else on your team can stay on the same page â no guessing, no back-and-forth. Just clear context. Letâs continue from where we stopped. Supernova is capable of handling your documentation. Head over to the Documentation tab. This is where you can start editing everything about your design system docs, all from the same place. You can: Add descriptions to your tokens, Define what each base token is for (as well as what itâs not for), Organize sections by colors, typography, spacing, or components, and Drop in images, code snippets, or examples. Youâre building the documentation inside the same tool where your tokens live. In other words, thereâs no jumping between tools and no additional setup. Thatâs where the automation kicks in. You edit once, and your docs stay synced with your design source. It all stays in one environment. Once youâre done, click Publish and you will be presented with a new window asking you to sign in. After that, youâre able to access your live documentation site. Practical Tips For Automations Automation is great. It saves hours of manual work and keeps your design system tight across design and code. The trick is knowing when to automate and how to make sure it keeps working over time. You donât need to automate everything right away. But if youâre doing the same thing over and over again, thatâs a kind of red flag. A few signs that itâs time to consider using automation: Youâre using the same styles across multiple platforms (like web and mobile). You have a shared design system used by more than one team. Design tokens change often, and you want updates to flow into code automatically. Youâre tired of manual updates every time the brand team tweaks a color. There are three steps you need to consider. Letâs look at each one. Step 1: Keep An Eye On Tools And API Updates If your pipeline depends on design tools, like Figma, or platforms, like Supernova, youâll want to know when changes are made and evaluate how they impact your work, because even small updates can quietly affect your exports. Itâs a good idea to check Figmaâs API changelog now and then, especially if something feels off with your token syncing. They often update how variables and components are structured, and that can impact your pipeline. Thereâs also an RSS feed for product updates. The same goes for Supernovaâs product updates. They regularly roll out improvements that might tweak how your tokens are handled or exported. If youâre using open-source tools like Style Dictionary, keeping an eye on the GitHub repo (particularly the Issues tab) can save you from debugging weird token name changes later. All of this isnât about staying glued to release notes, but having a system to check if something suddenly stops working. That way, youâll catch things before they reach production. Step 2: Break Your Pipeline Into Smaller Steps A common trap teams fall into is trying to automate everything in one big run: colors, spacing, themes, components, and docs, all processed in a single click. It sounds convenient, but itâs hard to maintain, and even harder to debug. Itâs much more manageable to split your automation into pieces. 
For example, having a single workflow that handles your core design tokens (e.g., colors, spacing, and font sizes), another for theme variations (e.g., light and dark themes), and one more for component mapping (e.g., buttons, inputs, and cards). This way, if your team changes how spacing tokens are named in Figma, you only need to update one part of the workflow, not the entire system. Itâs also easier to test and reuse smaller steps. Step 3: Test The Output Every Time Even if everything runs fine, always take a moment to check the exported output. It doesnât need to be complicated. A few key things: Are the token names clean and readable? If you see something like PrimaryColorColorText, thatâs a red flag. Did anything disappear or get renamed unexpectedly? It happens more often than you think, especially with typography or spacing tokens after design changes. Does the UI still work? If youâre using something like Tailwind, CSS variables, or custom themes, double-check that the new token values arenât breaking anything in the design or build process. To catch issues early, it helps to run tools like ESLint or Stylelint right after the pipeline completes. Theyâll flag odd syntax or naming problems before things get shipped. How AI Can Help Once your automation is stable, thereâs a next layer that can boost your workflow: AI. Itâs not just for writing code or generating mockups, but for helping with the small, repetitive things that eat up time in design systems. When used right, AI can assist without replacing your control over the system. Hereâs where it might fit into your workflow: Naming Suggestions When youâre dealing with hundreds of tokens, naming them clearly and consistently is a real challenge. Some AI tools can help by suggesting clean, readable names for your tokens or components based on patterns in your design. Itâs not perfect, but itâs a good way to kickstart naming, especially for large teams. Pattern Recognition AI can also spot repeated styles or usage patterns across your design files. If multiple buttons or cards share similar spacing, shadows, or typography, tools powered by AI can group or suggest components for systemization even before a human notices. Automated Documentation Instead of writing everything from scratch, AI can generate first drafts of documentation based on your tokens, styles, and usage. You still need to review and refine, but it takes away the blank-page problem and saves hours. Here are a few tools that already bring AI into the design and development space in practical ways: Uizard: Uizard uses AI to turn wireframes into designs automatically. You can sketch something by hand, and it transforms that into a usable mockup. Anima: Anima can convert Figma designs into responsive React code. It also helps fill in real content or layout structures, making it a powerful bridge between design and development, with some AI assistance under the hood. Builder.io: Builder uses AI to help generate and edit components visually. It's especially useful for marketers or non-developers who need to build pages fast. AI helps streamline layout, content blocks, and design rules. Conclusion This article is not about achieving complete automation in the technical sense, but more about using smart tools to streamline the menial and manual aspects of working with design systems. Exporting tokens, generating docs, and syncing design with code can be automated, making your process quicker and more reliable with the right setup. 
Instead of rebuilding everything from scratch every time, you now have a way to keep things consistent, stay organized, and save time. Further Reading âDesign System Guideâ by Romina Kavcic âDesign System In 90 Daysâ by Vitaly Friedman
UX Job Interview Helpers
Talking points. Smart questions. A compelling story. This guide helps you prepare for your UX job interview. And remember: no act of kindness, however small, is ever wasted.
When talking about job interviews for a UX position, we often discuss how to leave an incredible impression and how to negotiate the right salary. But itâs only one part of the story. The other part is to be prepared, to ask questions, and to listen carefully. Below, Iâve put together a few useful resources on UX job interviews â from job boards to Notion templates and practical guides. I hope you or your colleagues will find it helpful. The Design Interview Kit As you are preparing for that interview, get ready with the Design Interview Kit (Figma), a helpful practical guide that covers how to craft case studies, solve design challenges, write cover letters, present your portfolio, and negotiate your offer. Kindly shared by Oliver Engel. The Product Designerâs (Job) Interview Playbook (PDF) The Product Designerâs (Job) Interview Playbook (PDF) is a practical little guide for designers through each interview phase, with helpful tips and strategies on things to keep in mind, talking points, questions to ask, red flags to watch out for and how to tell a compelling story about yourself and your work. Kindly put together by Meghan Logan. From my side, I can only wholeheartedly recommend to not only speak about your design process. Tell stories about the impact that your design work has produced. Frame your design work as an enabler of business goals and user needs. And include insights about the impact youâve produced â on business goals, processes, team culture, planning, estimates, and testing. Also, be very clear about the position that you are applying for. In many companies, titles do matter. There are vast differences in responsibilities and salaries between various levels for designers, so if you see yourself as a senior, review whether it actually reflects in the position. A Guide To Successful UX Job Interviews (+ Notion template) Catt Smallâs Guide To Successful UX Job Interviews, a wonderful practical series on how to build a referral pipeline, apply for an opening, prepare for screening and interviews, present your work, and manage salary expectations. You can also download a Notion template. 30 Useful Questions To Ask In UX Job Interviews In her wonderful article, Nati Asher has suggested many useful questions to ask in a job interview when you are applying as a UX candidate. Iâve taken the liberty of revising some of them and added a few more questions that might be worth considering for your next job interview. What are the biggest challenges the team faces at the moment? What are the teamâs main strengths and weaknesses? What are the traits and skills that will make me successful in this position? Where is the company going in the next 5 years? What are the achievements I should aim for over the first 90 days? What would make you think âIâm so happy we hired X!â? Do you have any doubts or concerns regarding my fit for this position? Does the team have any budget for education, research, etc.? What is the process of onboarding in the team? Who is in the team, and how long have they been in that team? Who are the main stakeholders I will work with on a day-to-day basis? Which options do you have for user research and accessing users or data? Are there analytics, recordings, or other data sources to review? How do you measure the impact of design work in your company? To what extent does management understand the ROI of good UX? How does UX contribute strategically to the companyâs success? Who has the final say on design, and who decides what gets shipped? 
What part of the design process does the team spend most time on? How many projects do designers work on simultaneously? How has the organization overcome challenges with remote work? Do we have a design system, and in what state is it currently? Why does a company want to hire a UX designer? How would you describe the ideal candidate for this position? What does a career path look like for this role? How will my performance be evaluated in this role? How long do projects take to launch? Can you give me some examples? What are the most immediate projects that need to be addressed? How do you see the design team growing in the future? What traits make someone successful in this team? Whatâs the most challenging part of leading the design team? How does the company ensure itâs upholding its values? Before a job interview, have your questions ready. Not only will they convey a message that you care about the process and the culture, but also that you understand what is required to be successful. And this fine detail might go a long way. Donât Forget About The STAR Method Interviewers closer to business will expect you to present examples of your work using the STAR method (Situation â Task â Action â Result), and might be utterly confused if you delve into all the fine details of your ideation process or the choice of UX methods youâve used. Situation: Set the scene and give necessary details. Task: Explain your responsibilities in that situation. Action: Explain what steps you took to address it. Result: Share the outcomes your actions achieved. As Meghan suggests, the interview is all about how your skills add value to the problem the company is currently solving. So ask about the current problems and tasks. Interview the person who interviews you, too â but also explain who you are, your focus areas, your passion points, and how you and your expertise would fit in a product and in the organization. Wrapping Up A final note on my end: never take a rejection personally. Very often, the reasons you are given for rejection are only a small part of a much larger picture â and have almost nothing to do with you. It might be that a job description wasnât quite accurate, or the company is undergoing restructuring, or the finances are too tight after all. Donât despair and keep going. Write down your expectations. Job titles matter: be deliberate about them and your level of seniority. Prepare good references. Have your questions ready for that job interview. As Catt Small says, âonce you have a foot in the door, youâve got to kick it wide openâ. You are a bright shining star â donât you ever forget that. Job Boards Remote + In-person IXDA Who Is Still Hiring? UXPA Job Bank Otta Boooom Black Creatives Job Board UX Research Jobs UX Content Jobs UX Content Collective Jobs UX Writing Jobs Useful Resources âHow To Be Prepared For UX Job Interviews,â by yours truly âUX Job Search Strategies and Templates,â by yours truly âHow To Ace Your Next Job Interview,â by Startup.jobs âCracking The UX Job Interview,â by Artiom Dashinsky âThe Product Design Interview Process,â by Tanner Christensen â10 Questions To Ask in a UX Interview,â by Ryan Scott âSix questions to ask after a UX designer job interview,â by Nick Babich Meet âSmart Interface Design Patternsâ You can find more details on design patterns and UX in Smart Interface Design Patterns, our 15h-video course with 100s of practical examples from real-life projects â with a live UX training later this year. 
Everything from mega-dropdowns to complex enterprise tables, with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.
Stories Of August (2025 Wallpapers Edition)
Do you need a little inspiration boost? Well, then our new batch of desktop wallpapers might be for you. The wallpapers are designed with love by the community for the community and can be downloaded for free! Enjoy!
Everybody loves a beautiful wallpaper to freshen up their desktops and home screens, right? To cater for new and unique designs on a regular basis, we started our monthly wallpapers series more than 14 years ago, and from the very beginning to today, artists and designers from across the globe have accepted the challenge and submitted their artworks. This month is no exception, of course. In this post, youâll find desktop wallpapers for August 2025, along with a selection of timeless designs from our archives that are bound to make your August extra colorful. A big thank you to everyone who tickled their creativity and shared their wallpapers with us this month â this post wouldnât exist without your kind support! Now, if youâre feeling inspired after browsing this collection, why not submit a wallpaper to get featured in one of our upcoming posts? Fire up your favorite design tool, grab your camera or pen and paper, and tell the story you want to tell. We canât wait to see what youâll come up with! Happy August! You can click on every image to see a larger preview. We respect and carefully consider the ideas and motivation behind each and every artistâs work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers werenât anyhow influenced by us but rather designed from scratch by the artists themselves. August Afloat âSet sail into a serene summer moment with this bright and breezy wallpaper. A wooden boat drifts gently across wavy blue waters dotted with lily pads, capturing the stillness and simplicity of late August days.â â Designed by Libra Fire from Serbia. preview with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Dive Into Summer Mode âWhen your phone becomes a pool and your pupâs living the dream â itâs a playful reminder that sometimes the best escapes are simple: unplug, slow down, soak in the sunshine, and let your imagination do the swimming.â â Designed by PopArt Studio from Serbia. preview with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Sea Shanties And Ears In The Wind âAugust is like a boat cruise swaying with the rhythm of sea shanties. Our mascot really likes to have its muzzle caressed by the salty sea wind and getting its ears warmed by the summer sun.â â Designed by Caroline Boire from France. 
preview with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Queen Of August âAugust 8 is International Cat Day, so of course the month belongs to her majesty. Confident, calm, and totally in charge. Just like every cat ever.â â Designed by Ginger IT Solutions from Serbia. preview with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Happiness Happens In August âMany people find August one of the happiest months of the year because of holidays. You can spend days sunbathing, swimming, birdwatching, listening to their joyful chirping, and indulging in sheer summer bliss. August 8th is also known as the Happiness Happens Day, so make it worthwhile.â â Designed by PopArt Studio from Serbia. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Nostalgia âAugust, the final breath of summer, brings with it a wistful nostalgia for a season not yet past.â â Designed by Ami Totorean from Romania. preview without calendar: 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Relax In Bora Bora âAs we have taken a liking to diving through the coral reefs, weâll also spend August diving and took the leap to Bora Bora. There we enjoy the sea and nature and above all, we rest to gain strength for the new course that is to come.â â Designed by Veronica Valenzuela from Spain. preview without calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440 Banana! Designed by Ricardo Gimenes from Spain. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 Summer Day Designed by Kasturi Palmal from India. preview without calendar: 800x600, 1280x1024, 1600x1200, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Retro Road Trip âAs the sun dips below the horizon, casting a warm glow upon the open road, the retro van finds a resting place for the night. A campsite bathed in moonlight or a cozy motel straight from a postcard become havens where weary travelers can rest, rejuvenate, and prepare for the adventures that await with the dawn of a new day.â â Designed by PopArt Studio from Serbia. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Spooky Campfire Stories Designed by Ricardo Gimenes from Spain. 
preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 Bee Happy! âAugust means that fall is just around the corner, so I designed this wallpaper to remind everyone to âbee happyâ even though summer is almost over. Sweeter things are ahead!â â Designed by Emily Haines from the United States. preview without calendar: 640x480, 800x600, 1280x720, 1280x800, 1280x960, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Oh La La⊠Parisâ Night âI like the Paris night! All is very bright!â â Designed by VerĂłnica Valenzuela from Spain. preview without calendar: 800x480, 1024x768, 1152x864, 1280x800, 1280x960, 1440x900, 1680x1200, 1920x1080, 2560x1440 Cowabunga Designed by Ricardo Gimenes from Spain. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 Childhood Memories Designed by Francesco Paratici from Australia. preview without calendar: 320x480, 1024x768, 1024x1024, 1280x800, 1280x1024, 1366x768, 1440x900, 1680x1050, 1920x1080, 1920x1200, 2560x1440 Summer Nap Designed by Dorvan Davoudi from Canada. preview without calendar: 800x480, 800x600, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Live In The Moment âMy dog Sami inspired me for this one. He lives in the moment and enjoys every second with a big smile on his face. I wish we could learn to enjoy life like he does! Happy August everyone!â â Designed by Westie Vibes from Portugal. preview without calendar: 320x480, 1024x1024, 1080x1920, 1680x1200, 1920x1080, 2560x1440 Handwritten August âI love typography handwritten style.â â Designed by Chalermkiat Oncharoen from Thailand. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Psst, Itâs Camping Time⊠âAugust is one of my favorite months, when the nights are long and deep and crackling fire makes you think of many things at once and nothing at all at the same time. Itâs about heat and cold which allow you to touch the eternity for a few moments.â â Designed by Igor Izhik from Canada. preview without calendar: 1024x768, 1024x1024, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Hello Again âIn Melbourne it is the last month of quite a cool winter so we are looking forward to some warmer days to come.â â Designed by Tazi from Australia. preview without calendar: 320x480, 640x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x960, 1600x1200, 1920x1080, 1920x1440, 2560x1440 Coffee Break Time Designed by Ricardo Gimenes from Spain. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Subtle August Chamomiles âOur designers wanted to create something summery, but not very colorful, something more subtle. 
The first thing that came to mind was chamomile because there are a lot of them in Ukraine and their smell is associated with a summer field.â â Designed by MasterBundles from Ukraine. preview without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 Party Night Under The Stars âAugust⊠itâs time for a party and summer vacation â sea, moon, stars, music⊠and magical vibrant colors.â â Designed by Teodora Vasileva from Bulgaria. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1280x720, 1280x960, 1280x1024, 1400x1050, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440 A Bloom Of Jellyfish âI love going to aquariums â the colors, patterns, and array of blue hues attract the nature lover in me while still appeasing my design eye. One of the highlights is always the jellyfish tanks. They usually have some kind of light show in them, which makes the jellyfish fade from an intense magenta to a deep purple â and it literally tickles me pink. We discovered that the collective noun for jellyfish is a bloom and, well, it was love-at-first-collective-noun all over again. Iâve used some intense colors to warm up your desktop and hopefully transport you into the depths of your own aquarium.â â Designed by Wonderland Collective from South Africa. preview without calendar: 320x480, 800x600, 1024x768, 1280x960, 1680x1050, 1920x1200, 2560x1440 Colorful Summer ââAlways keep mint on your windowsill in August, to ensure that the buzzing flies will stay outside where they belong. Donât think summer is over, even when roses droop and turn brown and the stars shift position in the sky. Never presume August is a safe or reliable time of the year.â (Alice Hoffman)â â Designed by LĂvi from Hungary. preview without calendar: 800x480, 1024x768, 1280x720, 1280x1024, 1400x1050, 1680x1050, 1680x1200, 1920x1200, 2560x1440, 3475x4633 Searching For Higgs Boson Designed by Vlad Gerasimov from Georgia. preview without calendar: 800x600, 960x600, 1024x768, 1152x864, 1229x768, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1440x900, 1440x960, 1600x1200, 1600x1200, 1680x1050, 1728x1080, 1920x1200, 1920x1440, 2304x1440, 2560x1600 Freak Show Vol. 1 Designed by Ricardo Gimenes from Spain. preview without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160 Grow Where You Are Planted âEvery experience is a building block on your own life journey, so try to make the most of where you are in life and get the most out of each day.â â Designed by Tazi Design from Australia. preview without calendar: 320x480, 640x480, 800x600, 1024x768, 1152x864, 1280x720, 1280x960, 1600x1200, 1920x1080, 1920x1440, 2560x1440 Chill Out âSummer is in full swing and Chicago is feeling the heat! Take some time to chill out!â â Designed by Denise Johnson from Chicago. preview without calendar: 1024x768, 1280x800, 1280x1024, 1440x900, 1600x1200, 1920x1200 Estonian Summer Sun âThis is a moment from Southern Estonia that shows amazing summer nights.â Designed by Erkki Pung from Estonia. preview without calendar: 320x480, 1024x1024, 1280x800, 1440x900, 1920x1200
The Core Model: Start FROM The Answer, Not WITH The Solution
The Core Model is a practical methodology that flips traditional digital development on its head. Instead of starting with solutions or structure, we begin with a hypothesis about what users need and follow a simple framework that brings diverse teams together to create more effective digital experiences. By asking six good questions in the right order, teams align around user tasks and business objectives, creating clarity that transcends organizational boundaries.
Ever sat in a meeting where everyone jumped straight to solutions? âWe need a new app!â âLetâs redesign the homepage!â âAI will fix everything!â This solution-first thinking is endemic in digital development â and itâs why so many projects fail to deliver real value. As the creator of the Core Model methodology, I developed this approach to flip the script: instead of starting with solutions, we start FROM the answer. Whatâs the difference? Starting with solutions means imposing our preconceived ideas. Starting FROM the answer to a user task means forming a hypothesis about what users need, then taking a step back to follow a simple structure that validates and refines that hypothesis. Six Good Questions That Lead to Better Answers At its heart, the Core Model is simply six good questions asked in the right order, with a seventh that drives action. It appeals to common sense â something often in short supply during complex digital projects. When I introduced this approach to a large organization struggling with their website, their head of digital admitted: âWeâve been asking all these questions separately, but never in this structured way that connects them.â These questions help teams pause, align around what matters, and create solutions that actually work: Who are we trying to help, and whatâs their situation? What are they trying to accomplish? What do we want to achieve? How do they approach this need? Where should they go next? Whatâs the essential content or functionality they need? What needs to be done to create this solution? This simple framework creates clarity across team boundaries, bringing together content creators, designers, developers, customer service, subject matter experts, and leadership around a shared understanding. Starting With a Hypothesis The Core Model process typically begins before the workshop. The project lead or facilitator works with key stakeholders to: Identify candidate cores based on organizational priorities and user needs. Gather existing user insights and business objectives. Form initial hypotheses about what these cores should accomplish. Prepare relevant background materials for workshop participants. This preparation ensures the workshop itself is focused and productive, with teams validating and refining hypotheses rather than starting from scratch. The Core Model: Six Elements That Create Alignment Letâs explore each element of the Core Model in detail: 1. Target Group: Building Empathy First Rather than detailed personas, the Core Model starts with quick proto-personas that build empathy for users in specific situations: A parent researching childcare options late at night after a long day. A small business owner trying to understand tax requirements between client meetings. A new resident navigating unfamiliar public services in their second language. The key is to humanize users and understand their emotional and practical context before diving into solutions. 2. User Tasks: What People Are Actually Trying to Do Beyond features or content, what are users actually trying to accomplish? Making an informed decision about a major purchase. Finding the right form to apply for a service. Understanding next steps in a complex process. Checking eligibility for a program or benefit. These tasks should be based on user research and drive everything that follows. Top task methodology is a great approach to this. 3. 
Business Objectives: What Success Looks Like Every digital initiative should connect to clear organizational goals: Increasing online self-service adoption. Reducing support costs. Improving satisfaction and loyalty. Meeting compliance requirements. Generating leads or sales. These objectives provide the measurement framework for success. (If you work with OKRs, you can think of these as Key Results that connect to your overall Objective.) 4. Inward Paths: User Scenarios and Approaches This element goes beyond just findability to include the user's entire approach and mental model: What scenarios lead them to this need? What terminology do they use to describe their problem? How would they phrase their need to Google or an LLM? What emotions or urgency are they experiencing? What channels or touchpoints do they use? What existing knowledge do they bring? Understanding these different angles of approach ensures we meet users where they are. 5. Forward Paths: Guiding the Journey What should users do after engaging with this core? Take a specific action to continue their task. Explore related information or options. Connect with appropriate support channels. Save or share their progress. These paths create coherent journeys (core flows) rather than dead ends. 6. Core Content: The Essential Solution Only after mapping the previous elements do we define the actual solution: What information must be included? What functionality is essential? What tone and language are appropriate? What format best serves the need? This becomes our blueprint for what actually needs to be created. Action Cards: From Insight to Implementation The Core Model process culminates with action cards that answer the crucial seventh question: "What needs to be done to create this solution?" These cards typically include: Specific actions required; Who is responsible; Timeline for completion; Resources needed; Dependencies and constraints. Action cards transform insights into concrete next steps, ensuring the workshop leads to real improvements rather than just interesting discussions. The Power of Core Pairs A unique aspect of the Core Model methodology is working in core pairs: two people from different competencies or departments working together on the same core sheet. This approach creates several benefits: Cross-disciplinary insight: pairing someone with deep subject knowledge with someone who brings a fresh perspective. Built-in quality control: partners catch blind spots and challenge assumptions. Simplified communication: one-to-one dialogue is more effective than group discussions. Shared ownership: both participants develop a commitment to the solution. Knowledge transfer: skills and insights flow naturally between disciplines. The ideal pair combines different perspectives (content and design, business and technical, expert and novice), creating a balanced approach that neither could achieve alone. Creating Alignment Within and Between Teams The Core Model excels at creating two crucial types of alignment: Within Cross-Functional Teams Modern teams bring together diverse competencies: Content creators focus on messages and narrative. Designers think about user experience and interfaces. Developers consider technical implementation. Business stakeholders prioritize organizational needs. The Core Model gives these specialists a common framework. Instead of the designer focusing only on interfaces or the developer only on code, everyone aligns around user tasks and business goals.
As one UX designer told me: âThe Core Model changed our team dynamic completely. Instead of handing off wireframes to developers who didnât understand the âwhyâ behind design decisions, we now share a common understanding of what weâre trying to accomplish.â Between Teams Across the Customer Journey Users donât experience your organization in silos â they move across touchpoints and teams. The Core Model helps connect these experiences: Marketing teams understand how their campaigns connect to service delivery. Product teams see how their features fit into larger user journeys. Support teams gain context on user pathways and common issues. Content teams create information that supports the entire journey. By mapping connections between cores (core flows), organizations create coherent experiences rather than fragmented interactions. Breaking Down Organizational Barriers The Core Model creates a neutral framework where various perspectives can contribute while maintaining a unified direction. This is particularly valuable in traditional organizational structures where content responsibility is distributed across departments. The Workshop: Making It Happen The Core Model workshop brings these elements together in a practical format that can be adapted to different contexts and needs. Workshop Format and Timing For complex projects with multiple stakeholders across organizational silos, the ideal format is a full-day (6âhour) workshop: First Hour: Foundation and Context Introduction to the methodology (15 min). Sharing user insights and business context (15 min). Reviewing pre-workshop hypotheses (15 min). Initial discussion and questions (15 min). Hours 2â4: Core Mapping Core pairs work on mapping elements (120 min). Sharing between core pairs and in plenary between elements. Facilitators provide guidance as needed. Hours 5â6: Presentation, Discussion, and Action Planning Each core pair presents its findings (depending on the number of cores). Extensive group discussion and refinement. Creating action cards and next steps. The format is highly flexible: Teams experienced with the methodology can conduct focused sessions in as little as 30 minutes. Smaller projects might need only 2â3 hours. Remote teams might split the workshop into multiple shorter sessions. Workshop Environment The Core Model workshop thrives in different environments: Analog: Traditional approach using paper core sheets. Digital: Virtual workshops using Miro, Mural, FigJam, or similar platforms. Hybrid: Digital canvas in physical workshop, combining in-person interaction with digital documentation. Note: You can find all downloads and templates here. Core Pairs: The Key to Success The composition of core pairs is critical to success: One person should know the solution domain well (subject matter expert). The other brings a fresh perspective (and learns about a different domain). This combination ensures both depth of knowledge and fresh thinking. Cross-functional pairing creates natural knowledge transfer and breaks down silos. Workshop Deliverables Important to note: The workshop doesnât produce final solutions. Instead, it creates a comprehensive brief containing the following: Priorities and context for content development. Direction and ideas for design and user experience. Requirements and specifications for functionality. Action plan for implementation with clear ownership. 
This brief becomes the foundation for subsequent development work, ensuring everyone builds toward the same goal while leaving room for specialist expertise during implementation. Getting Started: Your First Core Model Implementation Ready to apply the Core Model in your organization? Hereâs how to begin: 1. Form Your Initial Hypothesis Before bringing everyone together: Identify a core where users struggle and the business impact is clear. Gather available user insights and business objectives. Form a hypothesis about what this core should accomplish. Identify key stakeholders across relevant departments. 2. Bring Together the Right Core Pairs Select participants who represent different perspectives: Content creators paired with designers. Business experts paired with technical specialists. Subject matter experts paired with user advocates. Veterans paired with fresh perspectives. 3. Follow the Seven Questions Guide core pairs through the process: Who are we trying to help, and whatâs their situation? What are they trying to accomplish? What do we want to achieve? How do they approach this need? Where should they go next? Whatâs the essential content or functionality? What needs to be done to create this solution? 4. Create an Action Plan Transform insights into concrete actions: Document specific next steps on action cards. Assign clear ownership for each action. Establish timeline and milestones. Define how youâll measure success. In Conclusion: Common Sense In A Structured Framework The Core Model works because it combines common sense with structure â asking the right questions in the right order to ensure we address what actually matters. By starting FROM the answer, not WITH the solution, teams avoid premature problem-solving and create digital experiences that truly serve user needs while achieving organizational goals. Whether youâre managing a traditional website, creating multi-channel content, or developing digital products, this methodology provides a framework for better collaboration, clearer priorities, and more effective outcomes. This article is a short adaptation of my book The Core Model â A Common Sense to Digital Strategy and Design. You can find information about the book and updated resources at thecoremodel.com.
Web Components: Working With Shadow DOM
Web Components are more than just Custom Elements. Shadow DOM, HTML Templates, and Custom Elements each play a role. In this article, Russell Beswick demonstrates how Shadow DOM fits into the broader picture, explaining why it matters, when to use it, and how to apply it effectively.
Itâs common to see Web Components directly compared to framework components. But most examples are actually specific to Custom Elements, which is one piece of the Web Components picture. Itâs easy to forget Web Components are actually a set of individual Web Platform APIs that can be used on their own: Custom Elements HTML Templates Shadow DOM In other words, itâs possible to create a Custom Element without using Shadow DOM or HTML Templates, but combining these features opens up enhanced stability, reusability, maintainability, and security. Theyâre all parts of the same feature set that can be used separately or together. With that being said, I want to pay particular attention to Shadow DOM and where it fits into this picture. Working with Shadow DOM allows us to define clear boundaries between the various parts of our web applications â encapsulating related HTML and CSS inside a DocumentFragment to isolate components, prevent conflicts, and maintain clean separation of concerns. How you take advantage of that encapsulation involves trade-offs and a variety of approaches. In this article, weâll explore those nuances in depth, and in a follow-up piece, weâll dive into how to work effectively with encapsulated styles. Why Shadow DOM Exists Most modern web applications are built from an assortment of libraries and components from a variety of providers. With the traditional (or âlightâ) DOM, itâs easy for styles and scripts to leak into or collide with each other. If you are using a framework, you might be able to trust that everything has been written to work seamlessly together, but effort must still be made to ensure that all elements have a unique ID and that CSS rules are scoped as specifically as possible. This can lead to overly verbose code that both increases app load time and reduces maintainability. <!-- div soup --> <div id="my-custom-app-framework-landingpage-header" class="my-custom-app-framework-foo"> <div><div><div><div><div><div>etc...</div></div></div></div></div></div> </div> Shadow DOM was introduced to solve these problems by providing a way to isolate each component. The <video> and <details> elements are good examples of native HTML elements that use Shadow DOM internally by default to prevent interference from global styles or scripts. Harnessing this hidden power that drives native browser components is what really sets Web Components apart from their framework counterparts. Elements That Can Host A Shadow Root Most often, you will see shadow roots associated with Custom Elements. However, they can also be used with any HTMLUnknownElement, and many standard elements support them as well, including: <aside> <blockquote> <body> <div><footer> <h1> to <h6> <header> <main> <nav> <p> <section> <span> Each element can only have one shadow root. Some elements, including <input> and <select>, already have a built-in shadow root that is not accessible through scripting. You can inspect them with your Developer Tools by enabling the Show User Agent Shadow DOM setting, which is âoffâ by default. Creating A Shadow Root Before leveraging the benefits of Shadow DOM, you first need to establish a shadow root on an element. This can be instantiated imperatively or declaratively. Imperative Instantiation To create a shadow root using JavaScript, use attachShadow({ mode }) on an element. The mode can be open (allowing access via element.shadowRoot) or closed (hiding the shadow root from outside scripts). 
const host = document.createElement('div');
const shadow = host.attachShadow({ mode: 'open' });
shadow.innerHTML = '<p>Hello from the Shadow DOM!</p>';
document.body.appendChild(host);

In this example, we've established an open shadow root. This means that the element's content is accessible from the outside, and we can query it like any other DOM node:

host.shadowRoot.querySelector('p'); // selects the paragraph element

If we want to prevent external scripts from accessing our internal structure entirely, we can set the mode to closed instead. This causes the element's shadowRoot property to return null. We can still access it from our shadow reference in the scope where we created it:

shadow.querySelector('p');

This is a crucial security feature. With a closed shadow root, we can be confident that malicious actors cannot extract private user data from our components. For example, consider a widget that shows banking information. Perhaps it contains the user's account number. With an open shadow root, any script on the page can drill into our component and parse its contents. In closed mode, only the user can perform this kind of action with manual copy-pasting or by inspecting the element.

I suggest a closed-first approach when working with Shadow DOM. Make a habit of using closed mode unless you are debugging, or only when absolutely necessary to get around a real-world limitation that cannot be avoided. If you follow this approach, you will find that the instances where open mode is actually required are few and far between.

Declarative Instantiation

We don't have to use JavaScript to take advantage of Shadow DOM. Registering a shadow root can be done declaratively. Nesting a <template> with a shadowrootmode attribute inside any supported element will cause the browser to automatically upgrade that element with a shadow root. Attaching a shadow root in this manner can even be done with JavaScript disabled.

<my-widget>
  <template shadowrootmode="closed">
    <p> Declarative Shadow DOM content </p>
  </template>
</my-widget>

Again, this can be either open or closed. Consider the security implications before using open mode, but note that you cannot access the closed mode content through any scripts unless this method is used with a registered Custom Element, in which case, you can use ElementInternals to access the automatically attached shadow root:

class MyWidget extends HTMLElement {
  #internals;
  #shadowRoot;
  constructor() {
    super();
    this.#internals = this.attachInternals();
    this.#shadowRoot = this.#internals.shadowRoot;
  }
  connectedCallback() {
    const p = this.#shadowRoot.querySelector('p');
    console.log(p.textContent); // this works
  }
};
customElements.define('my-widget', MyWidget);
export { MyWidget };

Shadow DOM Configuration

There are three other options besides mode that we can pass to Element.attachShadow().

Option 1: clonable:true

Until recently, if a standard element had a shadow root attached and you tried to clone it using Node.cloneNode(true) or document.importNode(node, true), you would only get a shallow copy of the host element without the shadow root content. The examples we just looked at would actually return an empty <div>. This was never an issue with Custom Elements that built their own shadow root internally. But for a declarative Shadow DOM, this means that each element needs its own template, and they cannot be reused.
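The clonable option fixes this. For imperatively created shadow roots it is passed straight to attachShadow(). Here is a minimal sketch of my own (not from the original example), assuming a browser that supports the option:

const host = document.createElement('div');
const shadow = host.attachShadow({ mode: 'closed', clonable: true });
shadow.innerHTML = '<p>Cloned along with the host</p>';
document.body.append(host);

// With clonable set, a deep clone now carries the shadow root and its content.
const copy = host.cloneNode(true);
document.body.append(copy);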
With this newly-added feature, we can selectively clone components when it's desirable:

<div id="original">
  <template shadowrootmode="closed" shadowrootclonable>
    <p> This is a test </p>
  </template>
</div>
<script>
const original = document.getElementById('original');
const copy = original.cloneNode(true);
copy.id = 'copy';
document.body.append(copy); // includes the shadow root content
</script>

Option 2: serializable:true

Enabling this option allows you to save a string representation of the content inside an element's shadow root. Calling Element.getHTML() on a host element will return a template copy of the Shadow DOM's current state, including all nested instances of shadowrootserializable. This can be used to inject a copy of your shadow root into another host, or cache it for later use. In Chrome, this actually works through a closed shadow root, so be careful of accidentally leaking user data with this feature. A safer alternative would be to use a closed wrapper to shield the inner contents from external influences while still keeping things open internally:

<wrapper-element></wrapper-element>
<script>
class WrapperElement extends HTMLElement {
  #shadow;
  constructor() {
    super();
    this.#shadow = this.attachShadow({ mode:'closed' });
    this.#shadow.setHTMLUnsafe(`<nested-element>
      <template shadowrootmode="open" shadowrootserializable>
        <div id="test">
          <template shadowrootmode="open" shadowrootserializable>
            <p> Deep Shadow DOM Content </p>
          </template>
        </div>
      </template>
    </nested-element>`);
    this.cloneContent();
  }
  cloneContent() {
    const nested = this.#shadow.querySelector('nested-element');
    const snapshot = nested.getHTML({ serializableShadowRoots: true });
    const temp = document.createElement('div');
    temp.setHTMLUnsafe(`<another-element>${snapshot}</another-element>`);
    const copy = temp.querySelector('another-element');
    copy.shadowRoot.querySelector('#test').shadowRoot.querySelector('p').textContent = 'Changed Content!';
    this.#shadow.append(copy);
  }
}
customElements.define('wrapper-element', WrapperElement);
const wrapper = document.querySelector('wrapper-element');
const test = wrapper.getHTML({ serializableShadowRoots: true });
console.log(test); // empty string due to closed shadow root
</script>

Notice setHTMLUnsafe(). That's there because the content contains <template> elements. This method must be called when injecting trusted content of this nature. Inserting the template using innerHTML would not trigger the automatic initialization into a shadow root.

Option 3: delegatesFocus:true

This option essentially makes our host element act as a <label> for its internal content. When enabled, clicking anywhere on the host or calling .focus() on it will move the cursor to the first focusable element in the shadow root. This will also apply the :focus pseudo-class to the host, which is especially useful when creating components that are intended to participate in forms.

<custom-input>
  <template shadowrootmode="closed" shadowrootdelegatesfocus>
    <fieldset>
      <legend> Custom Input </legend>
      <p> Click anywhere on this element to focus the input </p>
      <input type="text" placeholder="Enter some text...">
    </fieldset>
  </template>
</custom-input>

This example only demonstrates focus delegation. One of the oddities of encapsulation is that form submissions are not automatically connected. That means an input's value will not be in the form submission by default. Form validation and states are also not communicated out of the Shadow DOM.
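To give a sense of how that gap is bridged, here is a minimal sketch of a form-associated Custom Element that surfaces its internal value through ElementInternals. This is my own illustration, not the author's code, and the element name money-input is made up for the example:

class MoneyInput extends HTMLElement {
  static formAssociated = true; // opt the element into form participation
  #internals;
  #shadow;
  constructor() {
    super();
    this.#internals = this.attachInternals();
    this.#shadow = this.attachShadow({ mode: 'closed', delegatesFocus: true });
    this.#shadow.innerHTML = '<input type="text" placeholder="0.00">';
    // Mirror the inner value to the owning <form> so it appears in submissions.
    this.#shadow.querySelector('input').addEventListener('input', (event) => {
      this.#internals.setFormValue(event.target.value);
    });
  }
}
customElements.define('money-input', MoneyInput);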
There are similar connectivity issues with accessibility, where the shadow root boundary can interfere with ARIA. These are all considerations specific to forms that we can address with ElementInternals, which is a topic for another article, and is cause to question whether you can rely on a light DOM form instead.

Slotted Content

So far, we have only looked at fully encapsulated components. A key Shadow DOM feature is using slots to selectively inject content into the component's internal structure. Each shadow root can have one default (unnamed) <slot>; all others must be named. Naming a slot allows us to provide content to fill specific parts of our component as well as fallback content to fill any slots that are omitted by the user:

<my-widget>
  <template shadowrootmode="closed">
    <h2><slot name="title"><span>Fallback Title</span></slot></h2>
    <slot name="description"><p>A placeholder description.</p></slot>
    <ol><slot></slot></ol>
  </template>
  <span slot="title"> A Slotted Title</span>
  <p slot="description">An example of using slots to fill parts of a component.</p>
  <li>Foo</li>
  <li>Bar</li>
  <li>Baz</li>
</my-widget>

Default slots also support fallback content, but any stray text nodes will fill them. As a result, this only works if you collapse all whitespace in the host element's markup:

<my-widget><template shadowrootmode="closed">
  <slot><span>Fallback Content</span></slot>
</template></my-widget>

Slot elements emit slotchange events when their assignedNodes() are added or removed. These events do not contain a reference to the slot or the nodes, so you will need to pass those into your event handler:

class SlottedWidget extends HTMLElement {
  #internals;
  #shadow;
  constructor() {
    super();
    this.#internals = this.attachInternals();
    this.#shadow = this.#internals.shadowRoot;
    this.configureSlots();
  }
  configureSlots() {
    const slots = this.#shadow.querySelectorAll('slot');
    console.log({ slots });
    slots.forEach(slot => {
      slot.addEventListener('slotchange', () => {
        console.log({ changedSlot: slot.name || 'default', assignedNodes: slot.assignedNodes() });
      });
    });
  }
}
customElements.define('slotted-widget', SlottedWidget);

Multiple elements can be assigned to a single slot, either declaratively with the slot attribute or through scripting:

const widget = document.querySelector('slotted-widget');
const added = document.createElement('p');
added.textContent = 'A secondary paragraph added using a named slot.';
added.slot = 'description';
widget.append(added);

Notice that the paragraph in this example is appended to the host element. Slotted content actually belongs to the "light" DOM, not the Shadow DOM. Unlike the examples we've covered so far, these elements can be queried directly from the document object:

const widgetTitle = document.querySelector('my-widget [slot=title]');
widgetTitle.textContent = 'A Different Title';

If you want to access these elements internally from your class definition, use this.children or this.querySelector. Only the <slot> elements themselves can be queried through the Shadow DOM, not their content.

From Mystery To Mastery

Now you know why you would want to use Shadow DOM, when you should incorporate it into your work, and how you can use it right now. But your Web Components journey can't end here. We've only covered markup and scripting in this article. We have not even touched on another major aspect of Web Components: style encapsulation. That will be our topic in another article.
Designing Better UX For Left-Handed People
Today, roughly 10% of people are left-handed. Yet most products, digital and physical, aren't designed with them in mind. Let's change that. More design patterns in Smart Interface Design Patterns, a friendly video course on UX and design patterns by Vitaly.
Many products, digital and physical, are focused on "average" users: a statistical representation of the user base, which often overlooks or dismisses anything that deviates from that average, or happens to be an edge case. But people are never edge cases, and "average" users don't really exist. We must be deliberate and intentional to ensure that our products reflect that.

Today, roughly 10% of people are left-handed. Yet most products, digital and physical, aren't designed with them in mind. And there is rarely a conversation about how a particular digital experience would work better for their needs. So how would it adapt, and what are the issues we should keep in mind? Well, let's explore what it means for us. This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns, with live UX training coming up soon. Jump to table of contents.

Left-Handedness ≠ "Left-Only"

It's easy to assume that left-handed people are usually left-handed users. However, that's not necessarily the case. Because most products are designed with right-handed use in mind, many left-handed people have to use their right hand to navigate the physical world. From very early childhood, left-handed people have to rely on their right hand to use tools and appliances like scissors, openers, fridges, and so on. That's why left-handed people tend to be ambidextrous, sometimes using different hands for different tasks, and sometimes using different hands for the same tasks interchangeably. However, only 1% of people use both hands equally well (ambidextrous). In the same way, right-handed people aren't necessarily right-handed users. It's common to use a mobile device in either the left or right hand, or both, perhaps with a preference for one. But when it comes to writing, a preference is stronger.

Challenges For Left-Handed Users

Because left-handed users are in the minority, there is less demand for left-handed products, so they are typically more expensive and also more difficult to find. Troubles often emerge with seemingly simple tools: scissors, can openers, musical instruments, rulers, microwaves, and bank pens. For example, most scissors are designed with the top blade positioned for right-handed use, which makes cutting difficult and less precise. And in microwaves, buttons and interfaces are nearly always on the right, making left-handed use more difficult. Now, with digital products, most left-handed people tend to adapt to right-handed tools, which they use daily. Unsurprisingly, many use their right hand to navigate the mouse. However, it's often quite different on mobile, where the left hand is often preferred.

Don't make design decisions based on left/right-handedness. Allow customizations based on the user's personal preferences. Allow users to re-order columns (incl. the Actions column). In forms, place action buttons next to the user's last interaction. Keyboard accessibility helps everyone move faster (Esc).

Usability Guidelines To Support Both Hands

As Ruben Babu writes, we shouldn't design a fire extinguisher that can't be used by both hands. Think pull up and pull down, rather than swipe left or right. Minimize the distance to travel with the mouse. And when in doubt, align to the center. Bottom left is better for lefties, bottom right for righties. With magnifiers, users can't spot right-aligned buttons. On desktop, align buttons to the left/middle, not right.
On mobile, most people switch both hands when tapping. Key actions â put in middle half to two-thirds of the screen. A simple way to test the mobile UI is by trying to use the opposite-handed UX test. For key flows, we try to complete them with your non-dominant hand and use the opposite hand to discover UX shortcomings. For physical products, you might try the oil test. It might be more effective than you might think. Good UX Works For Both Our aim isnât to degrade the UX of right-handed users by meeting the needs of left-handed users. The aim is to create an accessible experience for everyone. Providing a better experience for left-handed people also benefits right-handed people who have a temporary arm disability. And thatâs an often-repeated but also often-overlooked universal principle of usability: better accessibility is better for everyone, even if it might feel that it doesnât benefit you directly at the moment. Useful Resources âDiscover Hidden UX Flaws With the Opposite-Handed UX Test,â by Jeff Huang âRight-Aligned Buttons Arenât More Efficient For Right-Handed People,â by Julia Y. âMobile Accessibility Target Sizes Cheatsheet,â by Vitaly Friedman âWhy The World Is Not Designed For Left-Handed People,â by Elvis Hsiao âUsability For Left Handedness 101â, by Ruben Babu Touch Design For Mobile Interfaces, by Steven Hoober Meet âSmart Interface Design Patternsâ You can find more details on design patterns and UX in Smart Interface Design Patterns, our 15h-video course with 100s of practical examples from real-life projects â with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables â with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off. Meet Smart Interface Design Patterns, our video course on interface design & UX. Video + UX Training Video only Video + UX Training $ 495.00 $ 699.00 Get Video + UX Training 25 video lessons (15h) + Live UX Training. 100 days money-back-guarantee. Video only $ 300.00$ 395.00 Get the video course 40 video lessons (15h). Updated yearly. Also available as a UX Bundle with 2 video courses.
Handling JavaScript Event Listeners With Parameters
Event listeners are essential for interactivity in JavaScript, but they can quietly cause memory leaks if not removed properly. And what if your event listener needs parameters? That's where things get interesting. Amejimaobari Ollornwi shares which JavaScript features make handling parameters with event handlers both possible and well-supported.
JavaScript event listeners are very important, as they exist in almost every web application that requires interactivity. As common as they are, it is also essential for them to be managed properly. Improperly managed event listeners can lead to memory leaks and can sometimes cause performance issues in extreme cases. Here's the real problem: JavaScript event listeners are often not removed after they are added. And while most event handlers do not need parameters, the rare cases that do are a little trickier to handle.

A common scenario where you may need to use parameters with event handlers is when you have a dynamic list of tasks, where each task in the list has a "Delete" button attached to an event handler that uses the task's ID as a parameter to remove the task. In a situation like this, it is a good idea to remove the event listener once the task has been completed to ensure that the deleted element can be successfully cleaned up, a process known as garbage collection.

A Common Mistake When Adding Event Listeners

A very common mistake when adding parameters to event handlers is calling the function with its parameters inside the addEventListener() method. This is what I mean:

button.addEventListener('click', myFunction(param1, param2));

The browser responds to this line by immediately calling the function, irrespective of whether or not the click event has happened. In other words, the function is invoked right away instead of being deferred, so it never fires when the click event actually occurs. You may also receive a console error in some cases (typically a TypeError complaining that the listener is not a function or object). This error makes sense because the second parameter of the addEventListener method can only accept a JavaScript function, an object with a handleEvent() method, or simply null. A quick and easy way to avoid this error is by changing the second parameter of the addEventListener method to an arrow or anonymous function.

button.addEventListener('click', (event) => {
  myFunction(event, param1, param2); // Runs on click
});

The only hiccup with using arrow and anonymous functions is that they cannot be removed with the traditional removeEventListener() method; you will have to make use of AbortController, which may be overkill for simple cases. AbortController shines when you have multiple event listeners to remove at once. For simple cases where you have just one or two event listeners to remove, the removeEventListener() method still proves useful. However, in order to make use of it, you'll need to store a reference to the listener function.

Using Parameters With Event Handlers

There are several ways to include parameters with event handlers. However, for the purpose of this demonstration, we are going to constrain our focus to the following two:

Option 1: Arrow And Anonymous Functions

Using arrow and anonymous functions is the fastest and easiest way to get the job done. To add an event handler with parameters using arrow and anonymous functions, we'll first need to call the function we're going to create inside the arrow function attached to the event listener:

const button = document.querySelector("#myButton");
button.addEventListener("click", (event) => {
  handleClick(event, "hello", "world");
});

After that, we can create the function with parameters:

function handleClick(event, param1, param2) {
  console.log(param1, param2, event.type, event.target);
}

Note that with this method, removing the event listener requires the AbortController.
To remove the event listener, we create a new AbortController object and then retrieve the AbortSignal object from it:

const controller = new AbortController();
const { signal } = controller;

Next, we pass the signal from the controller as an option in the addEventListener() method:

button.addEventListener("click", (event) => {
  handleClick(event, "hello", "world");
}, { signal });

Now we can remove the event listener by calling AbortController.abort():

controller.abort();

Option 2: Closures

Closures in JavaScript are another feature that can help us with event handlers. Remember the mistake that produced a type error? That mistake can also be corrected with closures. Specifically, with closures, a function can access variables from its outer scope. In other words, we can access the parameters we need in the event handler from the outer function:

function createHandler(message, number) {
  // Event handler
  return function (event) {
    console.log(`${message} ${number} - Clicked element:`, event.target);
  };
}
const button = document.querySelector("#myButton");
button.addEventListener("click", createHandler("Hello, world!", 1));

This establishes a function that returns another function. The outer function is called as the second parameter of the addEventListener() method so that the inner function it returns becomes the event handler. And with the power of closures, the parameters from the outer function will be made available for use in the inner function.

Notice how the event object is made available to the inner function. This is because the inner function is what is being attached as the event handler. The event object is passed to the function automatically because it's the event handler.

To remove the event listener, we can use the AbortController like we did before. However, this time, let's see how we can do that using the removeEventListener() method instead. In order for the removeEventListener method to work, a reference to the handler returned by createHandler needs to be stored and passed to the addEventListener method:

function createHandler(message, number) {
  return function (event) {
    console.log(`${message} ${number} - Clicked element:`, event.target);
  };
}
const handler = createHandler("Hello, world!", 1);
button.addEventListener("click", handler);

Now, the event listener can be removed like this:

button.removeEventListener("click", handler);

Conclusion

It is good practice to always remove event listeners whenever they are no longer needed to prevent memory leaks. Most times, event handlers do not require parameters; however, in rare cases, they do. Using JavaScript features like closures, AbortController, and removeEventListener, handling parameters with event handlers is both possible and well-supported.
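As a compact recap, here is a sketch of my own (assuming a button with the id myButton exists on the page) that combines a closure for the parameters with an AbortController for cleanup:

const button = document.querySelector("#myButton");
const controller = new AbortController();

// Closure: createHandler captures the parameters for the real handler.
function createHandler(message, number) {
  return (event) => console.log(`${message} ${number}`, event.target);
}

// The signal ties the listener's lifetime to the controller.
button.addEventListener("click", createHandler("Hello, world!", 1), {
  signal: controller.signal,
});

// Later, when the listener is no longer needed:
controller.abort();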
Why Non-Native Content Designers Improve Global UX
Ensuring your product communicates clearly to a global audience is not just about localisation. Even for products that have a proper localisation process, English often remains the default language for UI and communications. This article focuses on how you can make English content clear and inclusive for non-native users. Oleksii offers a practical guide based on his own experience as a non-native English-speaking content designer, defining the user experience for international companies.
A few years ago, I was in a design review at a fintech company, polishing the expense management flows. It was a routine session where we reviewed the logic behind content and design decisions. While looking over the statuses for submitted expenses, I noticed a label saying âIn approvalâ. I paused, re-read it again, and asked myself: âWhere is it? Are the results in? Where can I find them? Are they sending me to the app section called âApprovalâ?â This tiny label made me question what was happening with my money, and this feeling of uncertainty was quite anxiety-inducing. My team, all native English speakers, did not flinch, even for a second, and moved forward to discuss other parts of the flow. I was the only non-native speaker in the room, and while the label made perfect sense to them, it still felt off to me. After a quick discussion, we landed on âPending approvalâ â the simplest and widely recognised option internationally. More importantly, this wording makes it clear that thereâs an approval process, and it hasnât taken place yet. Thereâs no need to go anywhere to do it. Some might call it nitpicking, but that was exactly the moment I realised how invisible â yet powerful â the non-native speakerâs perspective can be. In a reality where user testing budgets arenât unlimited, designing with familiar language patterns from the start helps you prevent costly confusions in the user journey. Those same confusions often lead to: Higher rate of customer service queries, Lower adoption rates, Higher churn, Distrust and confusion. As A Native Speaker, You Donât See The Whole Picture Global products are often designed with English as their primary language. This seems logical, but hereâs the catch: Roughly 75% of English-speaking users are not native speakers, which means 3 out of every 4 users. Native speakers often write on instinct, which works much like autopilot. This can often lead to overconfidence in content that, in reality, is too culturally specific, vague, or complex. And that content may not be understood by 3 in 4 people who read it. If your team shares the same native language, content clarity remains assumed by default rather than proven through pressure testing. The price for that is the accessibility of your product. A study by National Library of Medicine found that US adults who had proficiency in English but did not use it as their primary language were significantly less likely to be insured, even when provided with the same level of service as everyone else. In other words, they did not finish the process of securing a healthcare provider â a process thatâs vital to their well-being, in part, due to unclear or inaccessible communication. If people abandon the process of getting something as vital as healthcare insurance, itâs easy to imagine them dropping out during checkout, account setup, or app onboarding. Non-native content designers, by contrast, do not write on autopilot. Because of their experience learning English, theyâre much more likely to tune into nuances, complexity, and cultural exclusions that natives often overlook. Thatâs the key to designing for everyone rather than 1 in 4. Non-native Content Designers Make Your UX Global Spotting The Clutter And Cognitive Load Issues When a non-native speaker has to pause, re-read something, or question the meaning of whatâs written, they quickly identify it as a friction point in the user experience. 
Why itâs important: Every extra second users have to spend understanding your content makes them more likely to abandon the task. This is a high price that companies pay for not prioritising clarity. Cognitive load is not just about complex sentences but also about the speed. Thereâs plenty of research confirming that non-native speakers read more slowly than native speakers. This is especially important when you work on the visibility of system status â time-sensitive content that the user needs to scan and understand quickly. One example you can experience firsthand is an ATM displaying a number of updates and instructions. Even when theyâre quite similar, it still overwhelms you when you realise that you missed one, not being able to finish reading. This kind of rapid-fire updates can increase frustration and the chances of errors. Always Advocating For Plain English They tend to review and rewrite things more often to find the easiest way to communicate the message. What a native speaker may consider clear enough might be dense or difficult for a non-native to understand. Why itâs important: Simple content better scales across countries, languages, and cultures. Catching Culture-specific Assumptions And References When things do not make sense, non-native speakers challenge them. Besides the idioms and other obvious traps, native speakers tend to fall into considering their life experience to be shared with most English-speaking users. Cultural differences might even exist within one globally shared language. Have you tried saying âsoccerâ instead of âfootballâ in a conversation with someone from the UK? These details may not only cause confusion but also upset people. Why itâs important: Making sure your product is free from culture-specific references makes your product more inclusive and safeguards you from alienating your users. They Have Another Level Of Empathy For The Global Audience Being a non-native speaker themselves, they have experience with products that do not speak clearly to them. Theyâve been in the global userâs shoes and know how it impacts the experience. Why itâs important: Empathy is a key driver towards design decisions that take into account the diverse cultural and linguistic background of the users. How Non-native Content Design Can Shape Your Approach To Design Your product wonât become better overnight simply because you read an inspiring article telling you that you need to have a more diverse team. I get it. So here are concrete changes that you can make in your design workflows and hiring routines to make sure your content is accessible globally. Run Copy Reviews With Non-native Readers When you launch a new feature or product, itâs a standard practice to run QA sessions to review visuals and interactions. When your team does not include the non-native perspective, the content is usually overlooked and considered fine as long as itâs grammatically correct. I know, having a dedicated localisation team to pressure-test your content for clarity is a privilege, but you can always start small. At one of my previous companies, we established a âclarity heroes councilâ â a small team of non-native English speakers with diverse cultural and linguistic backgrounds. During our reviews, they often asked questions that surprised us and highlighted where clarity was missing: Whatâs a âgrace periodâ? What will happen when I tap âsettle the paymentâ? 
These questions flag potential problems and help you save both money and reputation by avoiding thousands of customer service tickets. Review Existing Flows For Clarity Even if your product does not have major releases regularly, it accumulates small changes over time. Theyâre often plugged in as fixes or small improvements, and can be easily overlooked from a QA perspective. A good start will be a regular look at the flows that are critical to your business metrics: onboarding, checkout, and so on. Fence off some time for your team quarterly or even annually, depending on your product size, to come together and check whether your key content pieces serve the global audience well. Usually, a proper review is conducted by a team: a product designer, a content designer, an engineer, a product manager, and a researcher. The idea is to go over the flows, research insights, and customer feedback together. For that, having a non-native speaker on the audit task force will be essential. If youâve never done an audit before, try this template as it covers everything you need to start. Make Sure Your Content Guidelines Are Global-ready If you havenât done it already, make sure your voice & tone documentation includes details about the level of English your company is catering to. This might mean working with the brand team to find ways to make sure your brand voice comes through to all users without sacrificing clarity and comprehension. Use examples and showcase the difference between sounding smart or playful vs sounding clear. Leaning too much towards brand personality is where cultural differences usually shine through. As a user, you mightâve seen it many times. Hereâs a banking app that wanted to seem relaxed and relatable by introducing âDang itâ as the only call-to-action on the screen. However, users with different linguistic backgrounds might not be familiar with this expression. Worse, they might see it as an action, leaving them unsure of what will actually happen after tapping it. Considering how much content is generated with AI today, your guidelines have to account for both tone and clarity. This way, when you feed these requirements to the AI, youâll see the output that will not just be grammatically correct but also easy to understand. Incorporate Global English Heuristics Into Your Definition Of Success Basic heuristic principles are often documented as a part of overarching guidelines to help UX teams do a better job. The Nielsen Norman Group usability heuristics cover the essential ones, but it doesnât mean you shouldnât introduce your own. To complement this list, add this principle: Aim for global understanding: Content and design should communicate clearly to any user regardless of cultural or language background. You can suggest criteria to ensure itâs clear how to evaluate this: Action transparency: Is it clear what happens next when the user proceeds to the next screen or page? Minimal ambiguity: Is the content open to multiple interpretations? International clarity: Does this content work in a non-Western context? Bring A Non-native Perspective To Your Research, Too This one is often overlooked, but collaboration between the research team and non-native speaking writers is super helpful. If your research involves a survey or interview, they can help you double-check whether there is complex or ambiguous language used in the questions unintentionally. 
In a study by the Journal of Usability Studies, 37% of non-native speakers did not manage to answer the question that included a word they did not recognise or could not recall the meaning of. The question was whether they found the system to be âcumbersome to useâ, and the consequences of getting unreliable data and measurements on this would have a negative impact on the UX of your product. Another study by UX Journal of User Experience highlights how important clarity is in surveys. While most people in their study interpreted the question âHow do you feel about ⊠?â as âWhatâs your opinion on âŠ?â, some took it literally and proceeded to describe their emotions instead. This means that even familiar terms can be misinterpreted. To get precise research results, itâs worth defining key terms and concepts to ensure common understanding with participants. Globalise Your Glossary At Klarna, we often ran into a challenge of inconsistent translation for key terms. A well-defined English term could end up having from three to five different versions in Italian or German. Sometimes, even the same features or app sections could be referred to differently depending on the market â this led to user confusion. To address this, we introduced a shared term base â a controlled vocabulary that included: English term, Definition, Approved translations for all markets, Approved and forbidden synonyms. Importantly, the term selection was dictated by user research, not by assumption or personal preferences of the team. If youâre unsure where to begin, use this product content vocabulary template for Notion. Duplicate it for free and start adding your terms. We used a similar setup. Our new glossary was shared internally across teams, from product to customer service. Results? Reducing the support tickets related to unclear language used in UI (or directions in the user journey) by 18%. This included tasks like finding instructions on how to make a payment (especially with the least popular payment methods like bank transfer), where the late fee details are located, or whether itâs possible to postpone the payment. And yes, all of these features were available, and the team believed they were quite easy to find. A glossary like this can live as an add-on to your guidelines. This way, you will be able to quickly get up to speed new joiners, keep product copy ready for localisation, and defend your decisions with stakeholders. Approach Your Team Growth With An Open Mind âLooking for a native speakerâ still remains a part of the job listing for UX Writers and content designers. Thereâs no point in assuming itâs intentional discrimination. Itâs just a misunderstanding that stems from not fully accepting that our job is more about building the user experience than writing texts that are grammatically correct. Here are a few tips to make sure you hire the best talent and treat your applicants fairly: Remove the ânative speakerâ and âfluencyâ requirement. Instead, focus on the core part of our job: add âclear communicatorâ, âability to simplifyâ, or âexperience writing for a global audienceâ. Judge the work, not the accent. Over the years, there have been plenty of studies confirming that the accent bias is real â people having an unusual or foreign accent are considered less hirable. While some may argue that it can have an impact on the efficiency of internal communications, itâs not enough to justify the reason to overlook the good work of the applicant. 
My personal experience with the accent is that it mostly depends on the situation youâre in. When Iâm in a friendly environment and do not feel anxiety, my English flows much better as I do not overthink how I sound. Ironically, sometimes when Iâm in a room with my team full of British native speakers, I sometimes default to my Slavic accent. The question is: does it make my content design expertise or writing any worse? Not in the slightest. Therefore, make sure you judge the portfolios, the ideas behind the interview answers, and whiteboard challenge presentations, instead of focusing on whether the candidateâs accent implies that they might not be good writers. Good Global Products Need Great Non-native Content Design Non-native content designers do not have a negative impact on your teamâs writing. They sharpen it by helping you look at your content through the lens of your real user base. In the globalised world, linguistic purity no longer benefits your productâs user experience. Try these practical steps and leverage the non-native speaking lens of your content designers to design better international products.
Tiny Screens, Big Impact: The Forgotten Art Of Developing Web Apps For Feature Phones
Learn why flip phones still matter in 2025, and how you can build and launch web apps for these tiny devices.
Flip phones aren't dead. On the contrary, 200+ million non-smartphones are sold annually. That's roughly equivalent to the number of iPhones sold in 2024. Even in the United States, millions of flip phones are sold each year. As network operators struggle to shut down 2G service, new incentives are offered to encourage device upgrades that further increase demand for budget-friendly flip phones. This is especially true across South Asia and Africa, where an iPhone is unaffordable for the vast majority of the population (it takes two months of work on an average Indian salary to afford the cheapest iPhone).

Like their "smart" counterparts, flip phones (technically, this category is called "Feature Phones") are becoming increasingly more capable. They now offer features you'd expect from a smartphone, like 4G, WiFi, Bluetooth, and the ability to run apps. If you are targeting users in South Asia and Africa, or niches in Europe and North America, there are flip phone app platforms like Cloud Phone and KaiOS. Building for these platforms is similar to developing a Progressive Web App (PWA), with distribution managed across several app stores.

Jargon Busting

Flip phones go by many names. Non-smartphones are jokingly called "dumb phones". The technology industry calls this device category "feature phones". Regionally, they are also known as button phones or basic mobiles in Europe, and keypad mobiles in India. They all share a few traits: they are budget phones with small screens and physical buttons.

Why Build Apps For Flip Phones?

It's a common misconception that people who use flip phones do not want apps. In fact, many first-time internet users are eager to discover new content and services. While this market isn't as lucrative as Apple's App Store, there are a few reasons why you should build for flip phones.

Organic Growth

You do not need to pay to acquire flip phone users. Unlike Android or iOS, where the cost per install (CPI) averages around $2.5-3.3 according to GoGoChart, flip phone apps generate substantial organic downloads.

Brand Introduction

When flip phone users eventually upgrade to smartphones, they will search for the apps they are already familiar with. This will, in turn, generate more installs on the Google Play Store and, to a lesser extent, the Apple App Store.

Low Competition

There are ~1,700 KaiOS apps and fewer Cloud Phone widgets. Meanwhile, Google Play has over 1.55 million Android apps to choose from. It is much easier to stand out as one in a thousand than one in a million.

Technical Foundations

Flip phones could not always run apps. It wasn't until the Ovi Store (later renamed to the "Nokia Store") launched in 2009, a year after Apple's flagship iPhone launched, that flip phones got installable, third-party applications. At the time, apps were written for the fragmented Java 2 Mobile Edition (J2ME) runtime, available only on select Nokia models, and often required integration with poorly-documented, proprietary packages like the Nokia UI API. Today, flip phone platforms have rejected native runtimes in favor of standard web technologies in an effort to reduce barriers to entry and attract a wider pool of software developers. Apps running on modern flip phones are primarily written in languages many developers are familiar with (HTML, CSS, and JavaScript), and with them come a set of trade-offs and considerations.

Hardware

Flip phones are affordable because they use low-end, often outdated, hardware.
On the bottom end are budget phones with a real-time operating system (RTOS) running on chips like the Unisoc T107 with as little as 16MB of RAM. These phones typically support Opera Mini and Cloud Phone. At the upper end is the recently-released TCL Flip 4 running KaiOS 4.0 on the Qualcomm Snapdragon 4s with 1GB of RAM. While it is difficult to accurately compare such different hardware, Apple's latest iPhone 16 Pro has 500x more memory (8GB RAM) and supports download speeds up to 1,000x faster than a low-end flip phone (4G LTE CAT-1).

Performance

You might think that flip phone apps are easily limited by the scarce available resources of budget hardware. This is the case for KaiOS, since apps are executed on the device. Code needs to be minified, thumbnails downsized, and performance evaluated across a range of real devices. You cannot simply test on your desktop with a small viewport. However, as remote browsers, both Cloud Phone and Opera Mini overcome hardware constraints by offloading computationally expensive rendering to servers. This means performance is generally comparable to modern desktops, but can lead to a few quirky and, at times, unintuitive characteristics. For instance, if your app fetches a 1MB file to display a data table, this does not consume 1MB of the user's mobile data. Only changes to the screen contents get streamed to the user, consuming bandwidth. On the other hand, data is consumed by complex animations and page transitions, because each frame is at least a partial screen refresh. Despite this quirk, Opera Mini estimates it saves up to 90% of data compared to conventional browsers.

Security

Do not store sensitive data in browser storage. This holds true for flip phones, where the security concerns are similar to those of traditional web browsers. Although apps cannot generally access data from other apps, KaiOS does not encrypt client-side data. The implications are different for remote browsers. Opera Mini does not support client-side storage at all, while Cloud Phone stores data encrypted in its data centers and not on the user's phone.

Design For Modern Flip Phones

Simplify, Don't Shrink-to-fit

Despite their staying power, these devices go largely ignored by nearly every web development framework and library. Popular front-end web frameworks like Bootstrap v5 categorize all screens below 576px as extra small. Another popular choice, Tailwind, sets the smallest CSS breakpoint (a specific width where the layout changes to accommodate an optimal viewing experience across different devices) even higher, at 40em (640px). Design industry experts like the Nielsen Norman Group suggest the smallest breakpoint "is intended for mobile and generally is up to 500px." Standards like these advocate for a one-size-fits-all approach on small screens, but some small design changes can make a big difference for new internet users.

Small screens vary considerably in size, resolution, contrast, and brightness. Small screen usability requires distinct design considerations, not a shrink-to-fit model. While all of these devices have a screen width smaller than the smallest common breakpoints, treating them equally would be a mistake. Most websites render too large for flip phones. They use fonts that are too big, graphics that are too detailed, and sticky headers that occupy a quarter of the screen. To make matters worse, many websites disable horizontal scrolling by hiding content that overflows horizontally.
This allows for smooth scrolling on a touchscreen, but also makes it impossible to read text that extends beyond the viewport on flip phones. The table below includes physical display size, resolution, and examples to better understand the diversity of small screens across flip phones and budget smartphones.

Resolution | Display Size | Pixel Size | Example
QQVGA | 1.8" | 128×160 | Viettel Sumo 4G V1
QVGA | 2.4" | 240×320 | Nokia 235 4G
QVGA (Square) | 2.4" | 240×240 | Frog Pocket2
HVGA (480p) | 2.8-3.5" | 320×480 | BlackBerry 9720
VGA | 2.8-3.5" | 480×640 | Cat S22
WVGA | 2.8-3.5" | 480×800 | HP Pre 3
FWVGA+ | 5" | 480×960 | Alcatel 1

Note: Flip phones have small screens, typically between 1.8" and 2.8", with a resolution of 240x320 (QVGA) or 128x160 (QQVGA). For comparison, an Apple Watch Series 10 has a 1.8" screen with a resolution of 416x496. By modern standards, flip phone displays are small with low resolution, pixel density, contrast, and brightness.

Develop For Small Screens

Add custom, named breakpoints to your framework's defaults, rather than manually using media queries to override layout dimensions defined by classes.

Bootstrap v5

Bootstrap defines a map, $grid-breakpoints, in the _variables.scss Sass file that contains the default breakpoints from SM (576px) to XXL (1400px). Use the map-merge() function to extend the default and add your own breakpoint.

@import "node_modules/bootstrap/scss/functions";
$grid-breakpoints: map-merge($grid-breakpoints, ("xs": 320px));

Tailwind v4

Tailwind allows you to extend the default theme in the tailwind.config.js configuration file. Use the extend key to define new breakpoints.

const defaultTheme = require('tailwindcss/defaultTheme')
module.exports = {
  theme: {
    extend: {
      screens: {
        "xs": "320px",
        ...defaultTheme.screens,
      },
    },
  },
};

The Key(board) To Success

Successful flip phone apps support keyboard navigation using the directional pad (D-pad). This is the same navigation pattern as TV remotes: four arrow keys (up, down, left, right) and the central button. To build a great flip phone-optimized app, provide a navigation scheme where the user can quickly learn how to navigate your app using these limited controls. Ensure users can navigate to all visible controls on the screen.

Navigating PodLP using d-pad (left) and a virtual cursor (right).

Although some flip phone platforms support spatial navigation using an emulated cursor, it is not universally available and creates a worse user experience. Moreover, while apps that support keyboard navigation will work with an emulated cursor, this isn't necessarily true the other way around. Opera Mini Native only offers a virtual cursor, Cloud Phone only offers spatial navigation, and KaiOS supports both. If you develop with keyboard accessibility in mind, supporting flip phone navigation is easy. As general guidelines, never remove a focus outline. Instead, override default styles and use box shadows to match your app's color scheme while fitting appropriately. Autofocus on the first item in a sequence (list or grid), but be careful to avoid keyboard traps. Finally, make sure that lists scroll the newly-focused item completely into view, as shown in the sketch below.
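Here is the sketch referenced above: a minimal, hedged example of D-pad-friendly focus management (my own illustration, not from the article), assuming a vertical list of focusable buttons inside an element with the id menu:

// Move focus through a vertical list with the D-pad and keep the focused
// item fully visible on a tiny screen.
const items = Array.from(document.querySelectorAll('#menu button'));
let index = 0;
if (items.length) items[index].focus(); // autofocus the first item, but never trap the user

document.addEventListener('keydown', (event) => {
  if (event.key === 'ArrowDown') index = Math.min(index + 1, items.length - 1);
  else if (event.key === 'ArrowUp') index = Math.max(index - 1, 0);
  else if (event.key === 'Enter') { items[index].click(); return; } // central button
  else return; // leave other keys (Back, SoftLeft, ...) to the platform

  event.preventDefault();
  items[index].focus();
  items[index].scrollIntoView({ block: 'nearest' });
});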
Don't Make Users Type

If you have ever been frustrated typing a long message on your smartphone, only to have it accidentally erased, now imagine that frustration when you typed the message using T9 on a flip phone. Despite advancements in predictive typing, it's a chore to fill forms and compose even a single 180-character Tweet with just nine keys. Whatever you do, don't make flip phone users type!

Fortunately, it is easy to adapt designs to require less typing. Prefer numbers whenever possible. Allow users to register using their phone number (which is easy to type), send a PIN code or one-time password (OTP) that contains only numbers, and look up address details from a postal code. Each of these saves tremendous time and avoids frustration that often leads to user attrition. Alternatively, integrate with single-sign-on (SSO) providers to "Log in with Google," so users do not have to retype passwords that security teams require to be at least eight characters long and contain a letter, number, and symbol. Just keep in mind that many new internet users won't have an email address. They may not know how to access it, or their phone might not be able to access emails.

Finally, allow users to search by voice when it is available. As difficult as it is typing English using T9, it's much harder typing a language like Tamil, which has over 90M speakers across South India and Sri Lanka. Despite decades of advancement, technologies like auto-complete and predictive typing are seldom available for such languages. While imperfect, there are AI models like Whisper Tamil that can perform speech-to-text, thanks to researchers at universities like the Speech Lab at IIT Madras.

Flip Phone Browsers And Operating Systems

Another challenge with developing web apps for flip phones is their fragmented ecosystem. Various companies have used different approaches to allow websites and apps to run on limited hardware. There are at least three major web-based platforms that all operate differently:

Cloud Phone is the most recent solution, launched in December 2023, using a modern Puffin (Chromium) based remote browser that serves as an app store.
KaiOS, launched in 2016 using Firefox OS as its foundation, is a mobile operating system where the entire system is a web browser.
Opera Mini Native is by far the oldest, launched in 2005 as an ad-supported remote browser that still uses the decade-old, discontinued Presto engine.

Although both Cloud Phone and Opera Mini are remote browsers, there are significant differences between them that are not immediately apparent.

Cloud Phone
Cons: missing features like WebPush; no offline support; monetization not provided.
Pros: modern Chromium v128+ engine; rich multimedia support; no optimizations needed; actively developed; 100+ models launched in 2024.

KaiOS
Cons: outdated Gecko engine; hardware constrained; few models released in 2024; KaiAds integration required.
Pros: two app stores; full offline support; APIs for low-level integration; apps can be packaged or hosted.

Opera Mini Native
Cons: discontinued Presto engine; ~2.5s async execution limit; limited ES5 support; no multimedia support; no app store; last updated in 2020.
Pros: preinstalled on hundreds of millions of phones; partial offline support; stable, cross-platform.

Flip phones have come a long way, but each platform supports different capabilities. You may need to remove or scale back features based on what is supported. It is best to target the lowest common denominator that is feasible for your application. For information-heavy news websites, wikis, or blogs, Opera Mini's outdated technology works well enough. For video streaming services, both Cloud Phone and KaiOS work well. Conversely, remote browsers like Opera Mini and Cloud Phone cannot handle high frame rates, so only KaiOS is suitable for real-time interactive games. Just like with design, there is no one-size-fits-all approach to flip phone development.
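One practical way to handle those differing capabilities in a single codebase is to feature-detect at runtime and scale back where support is missing. A rough sketch of my own, written deliberately in plain ES5 since Opera Mini only offers limited ES5 support:

// Detect a few capabilities and degrade gracefully where they are missing.
var capabilities = {
  offline: 'serviceWorker' in navigator,
  storage: (function () {
    try {
      window.localStorage.setItem('t', '1');
      window.localStorage.removeItem('t');
      return true;
    } catch (e) {
      return false;
    }
  })(),
  smallScreen: !!window.matchMedia && window.matchMedia('(max-width: 320px)').matches
};

if (!capabilities.offline) {
  // Skip offline caching and background sync on platforms that cannot support them.
}
if (capabilities.smallScreen) {
  document.documentElement.className += ' tiny-screen';
}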
Even though all platforms are web-based, they require different tradeoffs. Tiny Screens, Big Impact The flip phone market is growing, particularly for 4G-enabled models. Relianceâs JioPhone is among the most successful models, selling more than 135 million units of its flagship KaiOS-enabled phone. The company plans to increase 4G flip phone rollout steadily as it migrates Indiaâs 250 million 2G users to 4G and 5G. Similar campaigns are underway across emerging markets, like Vodacomâs $14 Mobicel S4, a Cloud phone-enabled device in South Africa, and Viettelâs gifting 700,000 4G flip phones to current 2G subscribers to upgrade users in remote and rural areas. Estimates of the total active flip phone market size are difficult to come by, and harder still to find a breakdown by platform. KaiOS claims to enable âover 160 million phones worldwide,â while âover 300 million people use Opera Mini to stay connected.â Just a year after launch, Cloud Phone states that, âone million Cloud Phone users already access the service from 90 countries.â By most estimates, there are already hundreds of millions of web-enabled flip phone users eager to discover new products and services. Conclusion Hundreds of millions still rely on flip phones to stay connected. Yet, these users go largely ignored even by products that target emerging markets. Modern software development often prioritizes the latest and greatest over finding ways to affordably serve more than 2.6 billion unconnected people. If you are not designing for small screens using keyboard navigation, youâre shutting out an entire population from accessing your service. Flip phones still matter in 2025. With ongoing network transitions, millions will upgrade, and millions more will connect for the first time using 4G flip phones. This creates an opportunity to put your app into the hands of the newly connected. And thanks to modern remote browser technology, it is now easier than ever to build and launch your app on flip phones without costly and time-consuming optimizations to function on low-end hardware.
Design Patterns For AI Interfaces
Designing a new AI feature? Where do you even begin? Here's a simple, practical overview with useful design patterns for better AI experiences.
So you need to design a new AI feature for your product. How would you start? How do you design flows and interactions? And how do you ensure that that new feature doesnât get abandoned by users after a few runs? In this article, Iâd love to share a very simple but systematic approach to how I think about designing AI experiences. Hopefully, it will help you get a bit more clarity about how to get started. This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns đŁ â with live UX training coming up soon. Jump to table of contents. The Receding Role of AI Chat One of the key recent shifts is a slow move away from traditional âchat-alikeâ AI interfaces. As Luke Wroblewski wrote, when agents can use multiple tools, call other agents and run in the background, users orchestrate AI work more â thereâs a lot less chatting back and forth. In fact, chatbots are rarely a great experience paradigm â mostly because the burden of articulating intent efficiently lies on the user. But in practice, itâs remarkably difficult to do well and very time-consuming. Chat doesnât go away, of course, but itâs being complemented with task-oriented UIs â temperature controls, knobs, sliders, buttons, semantic spreadsheets, infinite canvases â with AI providing predefined options, presets, and templates. There, AI emphasizes the work, the plan, the tasks â the outcome, instead of the chat input. The results are experiences that truly amplify value for users by sprinkling a bit of AI in places where it delivers real value to real users. To design better AI experiences, we need to study 5 key areas that we need to shape. Input UX: Expressing Intent Conversational AI is a very slow way of helping users express and articulate their intent. Usability tests show that users often get lost in editing, reviewing, typing, and re-typing. Itâs painfully slow, often taking 30-60 seconds for input. As it turns out, people have a hard time expressing their intent well. In fact, instead of writing prompts manually, it's a good idea to ask AI to write a prompt to feed itself. With Flora AI, users can still write prompts, but they visualize their intent with nodes by connecting various sources visually. Instead of elaborately explaining to AI how we need the pipeline to work, we attach nodes and commands on a canvas. With input for AI, being precise is slow and challenging. Instead, we can abstract away the object we want to manipulate, and give AI precise input by moving that abstracted object on a canvas. Thatâs what Krea.ai does. In summary, we can minimize the burden of typing prompts manually â with AI-generated pre-prompts, prompt extensions, query builders, and also voice input. Output UX: Displaying Outcomes AI output doesn't have to be merely plain text or a list of bullet points. It must be helpful to drive people to insights, faster. For example, we could visualize output by creating additional explanations based on the userâs goal and motivations. For example, Amelia Wattenberger visualized AI output for her text editor PenPal by adding style lenses to explore the content from. The output could be visualized in sentence lengths and scales Sad â Happy, Concrete â Abstract, and so on. The outcome could also be visualized on a map, which, of course, is expected for an AI GIS analyst. Also, users can access individual data layers, turn them on and off, and hence explore the data on the map. 
We can also use forced ranking and prioritization to suggest the best options and avoid choice paralysis, even if a user asks for the top 10 recommendations. We can think about ways to present results as a data table, a dashboard, a visualization on a map, or a structured JSON file, for example. Refinement UX: Tweaking Output Users often need to cherry-pick some bits from the AI output and bring them together in a new place, and often they need to expand on one section, synthesize bits from another section, or just refine the outcome to meet their needs. Refinement is usually the most painful part of the experience, with many fine details left for users to explain elaborately. But we can use good old-fashioned UI controls like knobs, sliders, buttons, and so on to improve that experience, similar to how Adobe Firefly does it. We can also use presets and bookmarks, and allow users to highlight specific parts of the outcome that they'd like to change, with contextual prompts acting on the highlighted parts of the output rather than global prompts. AI Actions: Tasks To Complete With AI agents, we can now also allow users to initiate tasks that AI can perform on their behalf, such as scheduling events, planning, and deep research. We could also ask it to sort results or filter them in a specific way. But we can also add features to help users use AI output better, e.g., by visualizing it, making it shareable, allowing transformations between formats, or posting it to Slack, Jira, and so on. AI Integration: Where Work Happens Many AI interactions are locked within a specific product, but good AI experiences happen where the actual work happens. It would be quite unusual to expect a dedicated section for Autocomplete, for example, yet we do so for AI features. The actual boost in productivity comes when users rely on AI as a co-pilot or little helper in the tools they use daily for work: seamless integrations into Slack, Teams, Jira, GitHub, and so on, the tools that people use anyway. Dia Browser and Dovetail are great examples of this in action. Wrapping Up Along these five areas, we can explore ways to minimize the cost of interacting with a text box and allow users to interact with the points of interest directly, by tapping, clicking, selecting, highlighting, and bookmarking. Many products are obsessed with being AI-first. But you might be way better off being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value. And AI products don't have to be AI-only. There is a lot of value in mapping onto the mental models that people have adopted over the years and enhancing them with AI, similar to how we do it with browsers' autofill, rather than leaving users in front of a frightening and omnipresent text box. Useful Resources Where Should AI Sit In Your UI?, by Sharang Sharma Shape of AI: Design Patterns, by Emily Campbell AI UX Patterns, by Luke Bennis Design Patterns For Trust With AI, via Sarah Gold AI Guidebook Design Patterns, by Google Usable Chat Interfaces to AI Models, by Luke Wroblewski The Receding Role of AI Chat, by Luke Wroblewski Agent Management Interface Patterns, by Luke Wroblewski Designing for AI Engineers, by Eve Weinberg Meet "Smart Interface Design Patterns" You can find more details on design patterns and UX in Smart Interface Design Patterns, our 15-hour video course with 100s of practical examples from real-life projects, with a live UX training later this year.
Everything from mega-dropdowns to complex enterprise tables, with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.
Unmasking The Magic: The Wizard Of Oz Method For UX Research
The Wizard of Oz method is a proven UX research tool that simulates real interactions to uncover authentic user behavior. Victor Yocco unpacks the core principles of the WOZ method, explores advanced real-world applications, and highlights its unique value, including its relevance in the emerging field of agentic AI.
New technologies and innovative concepts frequently enter the product development lifecycle, promising to revolutionize user experiences. However, even the most ingenious ideas risk failure without a fundamental grasp of how users interact with these new experiences. Consider the plight of the Nintendo Power Glove. Despite being a commercial success (it sold over 1 million units), it was released in late 1989 and discontinued less than a full year later, in 1990. The two games created solely for the Power Glove sold poorly, and there was little use for the Glove with Nintendo's already popular traditional console games. A large part of the failure came down to the audience's reaction once they got their hands on the product (which was allegedly developed in 8 weeks): it was cumbersome and unintuitive. Users found syncing the glove to the moves in specific games extremely frustrating, as it required coding the moves into the glove's preset move buttons and then remembering which buttons would generate which move. With the more modern success of Nintendo's Wii and other movement-based consoles and games, we can see the Power Glove was a concept ahead of its time. If the Power Glove's developers had wanted to conduct effective research prior to building it out, they would have needed to look beyond traditional methods, such as surveys and interviews, to understand how a user might truly interact with the Glove. How could this have been done without a functional prototype and without slowing down the overall development process? Enter the Wizard of Oz method, a potent tool for bridging the chasm between abstract concepts and tangible user understanding, as one potential option. This technique simulates a fully functional system, yet a human operator ("the Wizard") discreetly orchestrates the experience. This allows researchers to gather authentic user reactions and insights without the prerequisite of a fully built product. The Wizard of Oz (WOZ) method is named in tribute to the similarly named book by L. Frank Baum. In the book, the Wizard is simply a man hidden behind a curtain, manipulating the reality of those who travel the land of Oz. Dorothy, the protagonist, exposes the Wizard for what he is: essentially an illusionist, a con man deceiving those who believe him to be omnipotent. Similarly, WOZ takes technologies that may or may not currently exist and emulates them in a way that should convince a research participant they are using an existing system or tool. WOZ enables the exploration of user needs, validation of nascent concepts, and mitigation of development risks, particularly with complex or emerging technologies. The product team in our example above might have used this method to have users simulate the actions of wearing the glove, programming moves into the glove, and playing games without needing a fully functional system. This could have uncovered the illogical situation of asking laypeople to code their hardware to respond to a game, exposed the frustration of needing to recode the device when changing games, and revealed the cumbersome layout of the controls on the physical device (even if they'd used a cardboard glove with simulated controls drawn in crayon in the appropriate locations). Jeff Kelley credits himself (PDF) with coining the term WOZ method in 1980 to describe the research method he employed in his dissertation.
However, Paula Roe credits Don Norman and Allan Munro with using the method as early as 1973 to conduct testing on an automated airport travel assistant. Regardless of who originated the method, both parties agree that it gained prominence when IBM later used it to conduct studies on a speech-to-text tool known as The Listening Typewriter. In this article, I'll cover the core principles of the WOZ method, explore advanced applications taken from practical experience, and demonstrate its unique value through real-world examples, including its application to the field of agentic AI. UX practitioners can use the WOZ method as another tool to unlock user insights and craft human-centered products and experiences. The Yellow Brick Road: Core Principles And Mechanics The WOZ method operates on the premise that users believe they are interacting with an autonomous system while a human wizard manages the system's responses behind the scenes. This individual, often positioned remotely (or off-screen), interprets user inputs and generates outputs that mimic the anticipated functionality of the experience. Cast Of Characters A successful WOZ study involves several key roles: The User The participant who engages with what they perceive as the functional system. The Facilitator The researcher who guides the user through predefined tasks and observes their behavior and reactions. The Wizard The individual who manipulates the system's behavior in real time, providing responses to user inputs. The Observer (Optional) An additional researcher who observes the session without direct interaction, allowing for a secondary perspective on user behavior. Setting The Stage For Believability: Leaving Kansas Behind Creating a convincing illusion is key to the success of a WOZ study. This necessitates careful planning of the research environment and the tasks users will undertake. Consider a study evaluating a new voice command system for smart home devices. The research setup might involve a physical mock-up of a smart speaker and predefined scenarios like "Play my favorite music" or "Dim the living room lights." The wizard, listening remotely, would then trigger the appropriate responses (e.g., playing a song, verbally confirming the lights are dimmed). Or perhaps it is a screen-based experience testing a new AI-powered chatbot. You have users entering commands into a text box, with another member of the product team providing responses in real time using a tool like Figma/FigJam, Miro, Mural, or other cloud-based software that allows multiple users to collaborate simultaneously (the author has no affiliation with any of the mentioned products). The Art Of Illusion Maintaining the illusion of a genuine system requires the following: Timely and Natural Responses The wizard must react to user inputs with minimal delay and in a manner consistent with expected system behavior. Hesitation or unnatural phrasing can break the illusion. Consistent System Logic Responses should adhere to a predefined logic. For instance, if a user asks for the weather in a specific city, the wizard should consistently provide accurate information. Handling the Unexpected Users will inevitably deviate from planned paths. The wizard must possess the adaptability to respond plausibly to unforeseen inputs while preserving the perceived functionality. Ethical Considerations Transparency is crucial, even in a method that involves a degree of deception.
Participants should always be debriefed after the session, with a clear explanation of the Wizard of Oz technique and the reasons for its use. Data privacy must be maintained as with any study, and participants should feel comfortable and respected throughout the process. Distinguishing The Method The WOZ method occupies a unique space within the UX research toolkit: Unlike usability testing, which evaluates existing interfaces, Wizard of Oz explores concepts before significant development. Distinct from A/B testing, which compares variations of a productâs design, WOZ assesses entirely new functionalities that might otherwise lack context if shown to users. Compared to traditional prototyping, which often involves static mockups, WOZ offers a dynamic and interactive experience, enabling observation of real-time user behavior with a simulated system. This method proves particularly valuable when exploring truly novel interactions or complex systems where building a fully functional prototype is premature or resource-intensive. It allows researchers to answer fundamental questions about user needs and expectations before committing significant development efforts. Letâs move beyond the foundational aspects of the WOZ method and explore some more advanced techniques and critical considerations that can elevate its effectiveness. Time Savings: WOZ Versus Crude Prototyping Itâs a fair question to ask whether WOZ is truly a time-saver compared to even cruder prototyping methods like paper prototypes or static digital mockups. While paper prototypes are incredibly fast to create and test for basic flow and layout, they fundamentally lack dynamic responsiveness. Static mockups offer visual fidelity but cannot simulate complex interactions or personalized outputs. The true time-saving advantage of the WOZ emerges when testing novel, complex, or AI-driven concepts. It allows researchers to evaluate genuine user interactions and mental models in a seemingly live environment, collecting rich behavioral data that simpler prototypes cannot. This fidelity in simulating a dynamic experience, even with a human behind the curtain, often reveals critical usability or conceptual flaws far earlier and more comprehensively than purely static representations, ultimately preventing costly reworks down the development pipeline. Additional Techniques And Considerations While the core principle of the WOZ method is straightforward, its true power lies in nuanced application and thoughtful execution. Seasoned practitioners may leverage several advanced techniques to extract richer insights and address more complex research questions. Iterative Wizardry The WOZ method isnât necessarily a one-off endeavor. Employing it in iterative cycles can yield significant benefits. Initial rounds might focus on broad concept validation and identifying fundamental user reactions. Subsequent iterations can then refine the simulated functionality based on previous findings. For instance, after an initial study reveals user confusion with a particular interaction flow, the simulation can be adjusted, and a follow-up study can assess the impact of those changes. This iterative approach allows for a more agile and user-centered exploration of complex experiences. Managing Complexity Simulating complex systems can be difficult for one wizard. Breaking complex interactions into smaller, manageable steps is crucial. Consider researching a multi-step onboarding process for a new software application. 
Instead of one person trying to simulate the entire flow, different aspects could be handled sequentially or even by multiple team members coordinating their responses. Clear communication protocols and well-defined responsibilities are essential in such scenarios to maintain a seamless user experience. Measuring Success Beyond Observation While qualitative observation is a cornerstone of the WOZ method, defining clear metrics can add a layer of rigor to the findings. These metrics should match research goals. For example, if the goal is to assess the intuitiveness of a new navigation pattern, you might track the number of times users express confusion or the time it takes them to complete specific tasks. Combining these quantitative measures with qualitative insights provides a more comprehensive understanding of the user experience. Integrating With Other Methods The WOZ method isnât an island. Its effectiveness can be amplified by integrating it with other research techniques. Preceding a WOZ study with user interviews can help establish a deeper understanding of user needs and mental models, informing the design of the simulated experience. Following a WOZ study, surveys can gather broader quantitative feedback on the concepts explored. For example, after observing users interact with a simulated AI-powered scheduling tool, a survey could gauge their overall trust and perceived usefulness of such a system. When Not To Use WOZ WOZ, as with all methods, has limitations. A few examples of scenarios where other methods would likely yield more reliable findings would be: Detailed Usability Testing Humans acting as wizards cannot perfectly replicate the exact experience a user will encounter. WOZ is often best in the early stages, where prototypes are rough drafts, and your team is looking for guidance on a solution that is up for consideration. Testing on a more detailed wireframe or prototype would be preferable to WOZ when you have entered the detailed design phase. Evaluating extremely complex systems with unpredictable outputs If the systemâs responses are extremely varied, require sophisticated real-time calculations that exceed human capacity, or are intended to be genuinely unpredictable, a human may struggle to simulate them convincingly and consistently. This can lead to fatigue, errors, or improvisations that donât reflect the intended system, thereby compromising the validity of the findings. Training And Preparedness The wizardâs skill is critical to the methodâs success. Training the individual(s) who will be simulating the system is essential. This training should cover: Understanding the Research Goals The wizard needs to grasp what the research aims to uncover. Consistency in Responses Maintaining consistent behavior throughout the sessions is vital for user believability. Anticipating User Actions While improvisation is sometimes necessary, the wizard should be prepared for common user paths and potential deviations. Remaining Unbiased The wizard must avoid leading users or injecting their own opinions into the simulation. Handling Unexpected Inputs Clear protocols for dealing with unforeseen user actions should be established. This might involve having a set of pre-prepared fallback responses or a mechanism for quickly consulting with the facilitator. All of this suggests the need for practice in advance of running the actual session. 
We shouldn't forget to run a number of dry runs in which we ask colleagues, or anyone willing to assist, not only to participate but also to come up with responses a user might give during a live session that could stump the wizard or throw things off. I suggest having a believable prepared error statement ready to go for when a user throws a curveball. A simple response from the wizard of "I'm sorry, I am unable to perform that task at this time" might be enough to move the session forward while also capturing a potentially unexpected situation your team can address in the final product design. Was This All A Dream? The Art Of The Debrief The debriefing session following the WOZ interaction is an additional opportunity to gather rich qualitative data. Beyond asking "What did you think?", effective debriefing involves sharing the purpose of the study and the fact that the experience was simulated. Researchers should then conduct psychological probing to understand the reasons behind user behavior and reactions. Asking open-ended questions like "Why did you try that?" or "What were you expecting to happen when you clicked that button?" can reveal valuable insights into user mental models and expectations. Exploring moments of confusion, frustration, or delight in detail can uncover key areas for design improvement. Think about what the Power Glove's development team could have uncovered if they'd asked participants what it was like to program the glove and to try to remember what they'd programmed into which set of keys. Case Studies: Real-World Applications The value of the WOZ method becomes apparent when examining its application in real-world research scenarios. Here is an in-depth review of one scenario and a quick summary of another study involving WOZ, where this technique proved invaluable in shaping user experiences. Unraveling Agentic AI: Understanding User Mental Models A significant challenge in the realm of emerging technologies lies in user comprehension. This was particularly evident when our team began exploring the potential of Agentic AI for enterprise HR software. Agentic AI refers to artificial intelligence systems that can autonomously pursue goals by making decisions, taking actions, and adapting to changing environments with minimal human intervention. Unlike generative AI, which primarily responds to direct commands or generates content, Agentic AI is designed to understand user intent, independently plan and execute multi-step tasks, and learn from its interactions to improve performance over time. These systems often combine multiple AI models and can reason through complex problems. For designers, this signifies a shift towards creating experiences where AI acts more like a proactive collaborator or assistant, capable of anticipating needs and taking the initiative to help users achieve their objectives rather than solely relying on explicit user instructions for every step. Preliminary research, including surveys and initial interviews, suggested that many HR professionals, while intrigued by the concept of AI assistance, struggled to grasp the potential functionality and practical implications of truly agentic systems, those capable of autonomous action and proactive decision-making. We saw that they had no reference point for what agentic AI was, even after we tried relevant analogies to current examples. Building a fully functional agentic AI prototype at this exploratory stage was impractical.
The underlying algorithms and integrations were complex and time-consuming to develop. Moreover, we risked building a solution based on potentially flawed assumptions about user needs and understanding. The WOZ method offered a solution. Setup We designed a scenario where HR employees interacted with what they believed was an intelligent AI assistant capable of autonomously handling certain tasks. The facilitator presented users with a web interface where they could request assistance with tasks like âdraft a personalized onboarding plan for a new marketing hireâ or âidentify employees who might benefit from proactive well-being resources based on recent activity.â Behind the scenes, a designer acted as the wizard. Based on the userâs request and the (simulated) available data, the designer would craft a response that mimicked the output of an agentic AI. For the onboarding plan, this involved assembling pre-written templates and personalizing them with details provided by the user. For the well-being resource identification, the wizard would select a plausible list of employees based on the general indicators discussed in the scenario. Crucially, the facilitator encouraged users to interact naturally, asking follow-up questions and exploring the systemâs perceived capabilities. For instance, a user might ask, âCan the system also schedule the initial team introductions?â The wizard, guided by pre-defined rules and the overall research goals, would respond accordingly, perhaps with a âYes, I can automatically propose meeting times based on everyoneâs calendarsâ (again, simulated). As recommended, we debriefed participants following each session. We began with transparency, explaining the simulation and that we had another live human posting the responses to the queries based on what the participant was saying. Open-ended questions explored initial reactions and envisioned use. Task-specific probing, like âWhy did you expect that?â revealed underlying assumptions. We specifically addressed trust and control (âHow much trust...? What level of control...?â). To understand mental models, we asked how users thought the âAIâ worked. We also solicited improvement suggestions (âWhat features...?â). By focusing on the âwhyâ behind user actions and expectations, these debriefings provided rich qualitative data that directly informed subsequent design decisions, particularly around transparency, human oversight, and prioritizing specific, high-value use cases. We also had a research participant who understood agentic AI and could provide additional insight based on that understanding. Key Insights This WOZ study yielded several crucial insights into user mental models of agentic AI in an HR context: Overestimation of Capabilities Some users initially attributed near-magical abilities to the âAIâ, expecting it to understand highly nuanced or ambiguous requests without explicit instruction. This highlighted the need for clear communication about the systemâs actual scope and limitations. Trust and Control A significant theme revolved around trust and control. Users expressed both excitement about the potential time savings and anxiety about relinquishing control over important HR processes. This indicated a need for design solutions that offered transparency into the AIâs decision-making and allowed for human oversight. 
Value in Proactive Assistance Users reacted positively to the AI proactively identifying potential issues (like burnout risk), but they emphasized the importance of the AI providing clear reasoning and allowing human HR professionals to review and approve any suggested actions. Need for Tangible Examples Abstract explanations of agentic AI were insufficient. Users gained a much clearer understanding through these simulated interactions with concrete tasks and outcomes. Resulting Design Changes Based on these findings, we made several key design decisions: Emphasis on Transparency The user interface would need to clearly show the AIâs reasoning and the data it used to make decisions. Human Oversight and Review Built-in approval workflows would be essential for critical actions, ensuring HR professionals retain control. Focus on Specific, High-Value Use Cases Instead of trying to build a general-purpose agent, we prioritized specific use cases where agentic capabilities offered clear and demonstrable benefits. Educational Onboarding The product onboarding would include clear, tangible examples of the AIâs capabilities in action. Exploring Voice Interaction for In-Car Systems In another project, we used the WOZ method to evaluate user interaction with a voice interface for controlling in-car functions. Our research question focused on the naturalness and efficiency of voice commands for tasks like adjusting climate control, navigating to points of interest, and managing media playback. We set up a car cabin simulator with a microphone and speakers. The wizard, located in an adjacent room, listened to the userâs voice commands and triggered the corresponding actions (simulated through visual changes on a display and audio feedback). This allowed us to identify ambiguous commands, areas of user frustration with voice recognition (even though it was human-powered), and preferences for different phrasing and interaction styles before investing in complex speech recognition technology. These examples illustrate the versatility and power of the method in addressing a wide range of UX research questions across diverse product types and technological complexities. By simulating functionality, we can gain invaluable insights into user behavior and expectations early in the design process, leading to more user-centered and ultimately more successful products. The Future of Wizardry: Adapting To Emerging Technologies The WOZ method, far from being a relic of simpler technological times, retains relevance as we navigate increasingly sophisticated and often opaque emerging technologies. The WOZ methodâs core strength, the ability to simulate complex functionality with human ingenuity, makes it uniquely suited for exploring user interactions with systems that are still in their nascent stages. WOZ In The Age Of AI Consider the burgeoning field of AI-powered experiences. Researching user interaction with generative AI, for instance, can be effectively done through WOZ. A wizard could curate and present AI-generated content (text, images, code) in response to user prompts, allowing researchers to assess user perceptions of quality, relevance, and trust without needing a fully trained and integrated AI model. Similarly, for personalized recommendation systems, a human could simulate the recommendations based on a userâs stated preferences and observed behavior, gathering valuable feedback on the perceived accuracy and helpfulness of such suggestions before algorithmic development. 
Even autonomous systems, seemingly the antithesis of human control, can benefit from WOZ studies. By simulating the autonomous behavior in specific scenarios, researchers can explore user comfort levels, identify needs for explainability, and understand how users might want to interact with or override such systems. Virtual And Augmented Reality Immersive environments like virtual and augmented reality present new frontiers for user experience research. WOZ can be particularly powerful here. Imagine testing a novel gesture-based interaction in VR. A researcher tracking the userâs hand movements could trigger corresponding virtual events, allowing for rapid iteration on the intuitiveness and comfort of these interactions without the complexities of fully programmed VR controls. Similarly, in AR, a wizard could remotely trigger the appearance and behavior of virtual objects overlaid onto the real world, gathering user feedback on their placement, relevance, and integration with the physical environment. The Human Factor Remains Central Despite the rapid advancements in artificial intelligence and immersive technologies, the fundamental principles of human-centered design remain as relevant as ever. Technology should serve human needs and enhance human capabilities. The WOZ method inherently focuses on understanding user reactions and behaviors and acts as a crucial anchor in ensuring that technological progress aligns with human values and expectations. It allows us to inject the âhuman factorâ into the design process of even the most advanced technologies. Doing this may help ensure these innovations are not only technically feasible but also truly usable, desirable, and beneficial. Conclusion The WOZ method stands as a powerful and versatile tool in the UX researcherâs toolkit. The WOZ methodâs ability to bypass limitations of early-stage development and directly elicit user feedback on conceptual experiences offers invaluable advantages. Weâve explored its core mechanics and covered ways of maximizing its impact. Weâve also examined its practical application through real-world case studies, including its crucial role in understanding user interaction with nascent technologies like agentic AI. The strategic implementation of the WOZ method provides a potent means of de-risking product development. By validating assumptions, uncovering unexpected user behaviors, and identifying potential usability challenges early on, teams can avoid costly rework and build products that truly resonate with their intended audience. I encourage all UX practitioners, digital product managers, and those who collaborate with research teams to consider incorporating the WOZ method into their research toolkit. Experiment with its application in diverse scenarios, adapt its techniques to your specific needs and donât be afraid to have fun with it. Scarecrow costume optional.
Droip: The Modern Website Builder WordPress Needed
Traditional page builders have shaped how we build WordPress sites for years. Let's take a closer look at [Droip](https://droip.com/), a modern, no-code visual builder, and explore how it redefines the experience with cleaner performance, full design freedom, and zero plugin dependency.
This article is sponsored by Droip Traditional WordPress page builders had their moment. Builders like Elementor, Divi, and Oxygen have been around for years. So long, in fact, that many of us just accepted their limitations as the cost of using WordPress. But Droip, a relatively new no-code website builder, steps in with a completely different philosophy. It is built to provide Webflow- and Framer-level power in WordPress: complete design freedom, built-in performance, and no reliance on third-party plugins. In this review, we're putting Droip head-to-head with traditional builders according to all the things that matter when choosing a website builder: Price, Effect on website performance, User-friendliness vs. flexibility, Features, Theme and layout options. What Is Droip? Droip is a no-code visual website builder for WordPress, designed to bridge the gap where other page builders fall short. Unlike other page builders, Droip is an all-in-one solution that aims to provide everything you need to build websites without any third-party dependencies, a shift from the norm in WordPress! And the best part? It's all included in your subscription, so you won't be hit with surprise upgrades. Pricing: A Smarter Investment with All Features Included While most page builders upsell critical features or require multiple add-ons, Droip keeps it simple: one platform, all features, no hidden costs. It's surprisingly affordable for the value it delivers. The Starter plan is just $34.50/year (currently at 50% off) for one site and includes all premium features. If you compare it with Elementor, that's almost half the cost of Elementor Pro's Essential plan, which starts at $60/year and still keeps several essentials behind paywalls. Droip also has a Lifetime plan. For a one-time payment of $299.50, you get unlimited use, forever. No renewals, no upcharges. All Droip Pro plans are fully featured from the start. You don't need to stack plugins or pay extra to unlock dynamic content support, pop-up builders, or submission forms. You also get access to the entire growing template library from day one. Note: Explore Droip pricing. Website Performance Comparison Performance directly impacts user experience, SEO, and conversion rates. So, to get a clear picture of how different page builders impact performance, we put Droip and Elementor to the test under identical conditions to see how each builder stacks up. We installed both on a clean WordPress setup using the default Twenty Twenty-Five theme to ensure a fair comparison. Then, we created identical layouts using comparable design elements and ran Lighthouse performance audits to measure load time, responsiveness, and Core Web Vitals. Test Conditions: Clean WordPress installation. Same theme: Twenty Twenty-Five. Same layout structure and design elements. Lighthouse used for performance scoring. (Screenshots: sample layout, Droip's and Elementor's performance scores, and their respective code output.) The difference was immediately clear. Droip generated a much cleaner DOM with significantly fewer <div>s and no unnecessary wrappers, resulting in faster load times and higher scores across the board. Elementor, on the other hand, added heavily nested markup and extra scripts, even on this simple layout, which dragged down its performance. If clean code, fast loading, and technical efficiency are priorities for you, Droip clearly comes out ahead.
Exploring The Features Now that we've seen how Droip outperforms the competition and does it at a highly competitive price, let's dive into the features to see what makes it such a powerful all-in-one builder. Freeform Visual Canvas For True Design Freedom What makes Droip different from the existing page builders is its freeform visual canvas. With Droip, you finally get the layout flexibility modern design demands and no longer need to place elements into rigid structures. The editor is powerful and modern, and feels more like designing in an interface design tool such as Figma. You can place elements exactly where you want, overlap sections, layer backgrounds, and create complex animations & interactions, all visually. Every element's layout behavior is editable on canvas, giving you pixel-level control without touching code. The editor supports both light and dark modes for a more comfortable, focused workspace. If you've used Figma or Webflow, you'll feel instantly at home. If you haven't, this is the most natural way to design websites you've ever tried. Instant Figma to Droip Handoff Speaking of Figma, if you have a design ready in Figma, you can instantly import it into Droip and turn it into a functional website with no need to rebuild from scratch. Seamless import of Figma designs directly into Droip makes for fast development. Your imported design comes in fully responsive by default, adapting to all screen sizes, including any custom breakpoints you define. And it supports unlimited breakpoints, too. You can define layout behavior exactly how you want it, and styles will cascade intelligently across smaller screens. No Third-Party Plugins Needed For Dynamic Content In traditional WordPress, handling dynamic content means installing ACF or other third-party plugins. But with Droip, all of that is natively integrated. It comes with a powerful Dynamic Content Manager that lets you: Create custom content types and fields. Use reference and multi-reference relationships. Build dynamic templates visually. Add dynamic SEO to template pages. Apply advanced filtering to Collection elements. All without writing a single line of code or relying on external plugins. Reusable Styling With Class-Based Editing Droip also has an efficient way to manage design at scale without repetitive work. It uses a class-based styling system that brings structure and scalability to your design process. When you style an element, those styles are automatically saved as reusable CSS classes. Here's what that means for you: You can create global classes for common components like buttons, cards, or headings. Reuse those styles across pages and projects with consistency. Update a class once, and every instance updates instantly. You can also create subclasses to make slight variations, like secondary buttons, while still inheriting styles from the parent. CSS Variables For Global Styling Droip takes styling even further with Global Variables, allowing you to define design tokens like colors, fonts, spacing, and sizing that can be reused across your entire site. You can pair these global variables with your class-based structure to: Maintain visual consistency; Update values globally with a single change; Easily manage themes, like switching between light and dark modes with one click. And while Droip offers a fully visual experience, it doesn't limit advanced users. You can write custom CSS for any class or element, and even inject JavaScript at the page or element level when needed.
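For readers who think in code, the class-and-token approach described above maps onto plain CSS roughly like this; the names below are made up purely for illustration and are not Droip's actual generated output:

```css
/* Global design tokens, defined once and reused across the site */
:root {
  --color-primary: #4f46e5;
  --space-md: 1rem;
  --radius-md: 8px;
}

/* A global class for a common component */
.button {
  background: var(--color-primary);
  color: #fff;
  padding: var(--space-md);
  border-radius: var(--radius-md);
}

/* A subclass-style variation: used together with .button, it overrides only what differs */
.button-secondary {
  background: transparent;
  color: var(--color-primary);
}
```

Changing --color-primary in one place then propagates to every rule that references it, which is essentially what "update a class once, and every instance updates instantly" relies on at scale.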
Build Complex Interactions and Animations Visually When it comes to modern animations and interactive design, Droip leaves traditional WordPress page builders far behind. Its fully visual interaction builder lets you create dynamic, immersive experiences. You can build scroll-based animations, hover and click effects, interactive sections that respond across devices, and control visibility, motion, and behavior all within a visual interface. For advanced users, Droip includes a timeline-based editor where you can: Create multi-step animations; Fine-tune transitions with precise timing, easing, delays, and sequencing. Even text animations get special attention. You can animate text by character, word, or full element. Choose custom triggers (scroll, hover, load, and so on) and select from various transition styles or create your own. Droip's no-code website builder truly helps you move past generic and create unique animations and complex interactions. Seamless Integration Management With Droip Apps Droip takes the hassle out of connecting third-party tools with its intuitive Droip Apps system. You can install and manage essential integrations such as analytics, CRMs, email marketing platforms, support widgets, and more, all from within the Droip editor itself. This centralized approach means you never have to leave your workspace. The clean, user-friendly interface guides you through the connection process visually, making setup fast and straightforward even if youâre not a technical expert. Accessibility Is Core To The Experience One of Droipâs standout features is its built-in focus on accessibility from day one. Unlike many platforms that rely on third-party plugins for accessibility, Droip integrates it directly into the core experience. Whether you need to enlarge editor text, reduce motion effects, use a larger cursor, or work with color-blindâfriendly palettes, Droip ensures an inclusive editing environment. But it doesnât stop at editor settings. Droip actively helps you follow best accessibility practices, enforcing semantic HTML, prompting for proper alt text, and supporting ARIA labels. Plus, its built-in contrast checker ensures your designs arenât just visually appealing, theyâre easy to read and use for everyone. Team Collaboration Made Easy Collaboration is also a core part of the experience, thoughtfully designed to support teams, clients, and developers alike. With Droipâs Role Manager, you can define exactly what each role can view, edit, or manage within the builder. You can assign custom roles to team members based on their responsibilities, like designers, developers, content editors, clients, and so on. For handling client reviews, it also generates a shareable view-only link that gives clients access to preview the site without giving them edit permissions or exposing the backend. Perfect for gathering feedback and approvals while maintaining full control. Built-in Quality Control Before you publish your site, Droip helps ensure your site is technically sound with its built-in Page Audit tool. It automatically scans your layout for: Missing alt text on images, Broken links, Unassigned or duplicate classes, Accessibility issues, And more. So youâre not just building beautiful pages, youâre shipping fast, accessible, SEO-ready websites with confidence. Theme & Layout Options Droip has a growing library of high-quality templates and modular layout options, so youâre never out of options. 
Template Kits: Full Website Packs Droipâs Template Kits include complete multi-page website designs for every industry. Pick a template, update the content, and youâre ready to launch. New template kits are added regularly, so you're always equipped with the latest design trends. And the best part? At no additional cost. You get access to the finest designs without ever paying extra. Pre-Designed Pages Do you need just a landing page or a pricing page? Droip also offers standalone pre-designed pages you can drop into your project and customize instantly. Pre-Made Sections Prefer to build from scratch but donât want to start with a blank canvas? It also has ready-made sections like hero banners, testimonials, pricing blocks, and FAQs. You can visually assemble your layout in minutes using these. Wireframes You can also map out your layout using wireframes before applying any styling. Itâs a great way to get your content and structure right without distractions, perfect for planning UX and content flow. How Easy Is Droip to Use? If you want something dead simple and just need to build a basic site fast, there are other options like Elementor that can do that, but at the cost of power, performance, and flexibility. Droip, on the other hand, has a bit of a learning curve. Thatâs because itâs way more powerful and is built for those who care about design control, clean output, and scalability. If youâre someone who wants to fine-tune every pixel, build advanced layouts, and doesnât mind a learning curve, youâll appreciate the level of control it offers. Having said that, itâs not hard to use once you understand how it works. The learning curve, especially for complete beginners, mostly comes from understanding its powerful features like dynamic content, reusable components (called Symbols), styling logic using classes, global variables, and breakpoints, advanced interactions using custom animation timelines, etc. But to help you get up to speed quickly, Droip includes: Guided onboarding to walk you through the essentials. A growing library of templates, pages, UI components, and wireframes to kickstart your projects. An AI Generator that can scaffold entire pages and layouts in seconds. Detailed documentation and video tutorials (with more added regularly). What Users Are Saying For many users, Droip is more than just a builder. Itâs the all-in-one tool WordPress has been waiting for. They are calling it the future of WordPress, a truly great alternative to tools like Framer and Webflow. TL;DR: Why Droip Outshines Traditional Builders All-in-one builder with no third-party bloat. Clean, performance-optimized code output. Figma integration + modern visual canvas. Dynamic content, advanced interactions, and global styling. One price, all features, no hidden costs. Overall Verdict: Is Droip Really Better Than Alternatives? After putting Droip through its paces, the answer is a clear yes. Droip not only matches traditional WordPress page builders where it counts, but it surpasses them in nearly every critical area. From its cleaner, faster code output and outstanding performance to its unparalleled design freedom and powerful built-in features, Droip solves many of the pain points that users have accepted for years. Its all-in-one approach eliminates the need for multiple plugins, saving time, money, and technical headaches. While there is a learning curve for beginners, the payoff is huge for those who want full control, scalability, and a truly modern web design experience inside WordPress. 
If you're serious about building high-quality, scalable, and visually stunning websites, Droip isn't just an alternative; it's the future of WordPress site building. Ready to experience the difference yourself? Try Droip today and start building faster, cleaner, and smarter.
Design Guidelines For Better Notifications UX
As always in design, timing matters, and so do timely notifications. Let's explore how we might improve the notifications UX. More design patterns in our Smart Interface Design Patterns, a friendly video course on UX and design patterns by Vitaly, from complex data tables and nested filters to FAQs and error messages.
In many products, setting notification channels to mute is the default rather than the exception. There is a good reason for it: the high frequency of notifications, which creates disruptions and eventually notification fatigue, at which point any message that pops up gets dismissed instantly. In usability testing, it's the most frequent complaint, yet every app desperately tries to capture a glimpse of our attention, sending more notifications our way. Let's see how we could make the notifications UX slightly better. This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns, with live UX training coming up soon. Jump to table of contents. The Many Faces Of Notifications Notifications are distractions by nature; they bring a user's attention to a (potentially) significant event they aren't aware of or might want to be reminded of. As such, they can be very helpful and relevant, providing assistance and bringing structure and order to the daily routine. Until they are not. Not every communication option is a notification. As Kim Salazar rightfully noted, "Status communication often relies on validation, status indicators, and notifications. While they are often considered to be similar, they are actually quite different." In general, notifications can be either informational (calendar reminders, delay notifications, election night results) or encourage action (approve payment, install an update, confirm a friend request). They can stream from various sources and have various impacts. UI notifications appear as subtle cards in UIs as users interact with the web interface; as such, they are widely accepted and less invasive than some of their counterparts. In-browser push notifications are more difficult to dismiss and draw attention to themselves even if the user isn't accessing the UI. In-app notifications live within desktop and mobile apps, and can be as humble as UI notifications, but can take a more central role with messages pushed to the home screen or the notifications center. OS notifications such as software updates or mobile carrier changes also get in the mix, often appearing together with a wide variety of notes, calendar updates, and everything in between. Finally, notifications can find their way into email, SMS, and social messaging apps, coming from chatbots, recommendation systems, and actual humans. But we don't pay the same amount of attention to every notification. It can take users weeks to eventually install a software update prompted by an OS notification, but just a few hours to confirm or decline a new LinkedIn request. Not Every Notification Is Equal The level of attention users grant to notifications depends on their nature, or, more specifically, how and when notifications are triggered. People care more about new messages from close friends and relatives, bank transactions and important alerts, calendar notifications, and any actionable and awaited confirmations or releases. People care less about news updates, social feed updates, announcements, new features, crash reports, and promotional and automated messages in general. Most importantly, a message from another human being is always valued much higher than any automated notification. Design For Levels Of Severity As Sara Vilas suggests, we can break down notification design across three levels of severity: high, medium, and low attention.
And then, notification types need to be further defined by specific attributes on those three levels, whether they are alerts, warnings, confirmations, errors, success messages, or status indicators. High Attention Alerts (immediate attention required), Errors (immediate action required), Exceptions (system anomalies, something didn't work), Confirmations (potentially destructive actions that need user confirmation to proceed). Medium Attention Warnings (no immediate action required), Acknowledgments (feedback on user actions), Success messages. Low Attention Informational messages (aka passive notifications, something is ready to view), Badges (typically on icons, signifying something new since last interaction), Status indicators (system feedback). Taking it one step further, we can map the attention level against the type of messaging we are providing, very similar to Zendesk's tone mapping, which plots impact against the type of messaging and shows how the tone should adjust, becoming more humble, real, distilled, or charming. So, notifications can be different, and different notifications are perceived differently; however, the more personal, relevant, and timely notifications are, the higher the engagement we should expect. Start Sending Notifications Slowly But Steadily It's not uncommon to sign up, only to realize a few moments later that the inbox is filling up with all kinds of irrelevant messages. That's exactly the wrong thing to do. A study by Facebook showed that sending fewer notifications improved user satisfaction and long-term usage of a product. Initially, once the notification rate was reduced, there was indeed a loss of traffic, but it "gradually recovered over time", and after an extended period, it had fully recovered and even turned into a gain. A good starting point is to set up a slow default notification frequency for different types of customers. As the customer keeps using the interface, we could ask them to decide on the kind of notifications they'd prefer and their frequency. Send notifications sparingly at first, and over time increase and/or decrease the number of notifications per type of customer. This might work much better for our retention rates. Don't Rely On Generic Defaults: Set Up Notification Modes Typically, users can opt in and opt out of every single type of notification in their settings. In general, it's a good idea, but it can also be very overwhelming, and it's not necessarily clear how important each notification is. Alternatively, we could provide predefined recommended options, perhaps with a "calm mode" (low frequency), a "regular mode" (medium frequency), and a "power-user mode" (high frequency). As time passes, the format of notifications might need adjustments as well. Rather than having notifications sent one by one as events occur, users could choose a "summary mode," with all notifications grouped into a single standalone message delivered at a particular time each day or every week. That's one of the settings that Slack provides when it comes to notifications; in fact, the system adapts the frequency of notifications over time, too. Initially, as Slack channels can be quite silent, the system sends notifications for every posted message. As activity becomes more frequent, Slack recommends reducing the notification level so the user will be notified only when they are actually mentioned. Make Notification Settings A Part Of Onboarding We could also include frequency options in our onboarding design.
A while back, Basecamp, for example, introduced "Always On" and "Work Can Wait" options as part of its onboarding, so new customers can select whether they wish to receive notifications as they occur (at any time), or choose specific time ranges and days when notifications can be sent. Or, the other way around, we could ask users when they don't want to be disturbed, and suspend notifications at that time. Not every customer wants to receive work-related notifications outside of business hours or on the weekend, even if their colleagues might be working extra hours on Friday night on the other side of the planet. Allow Users To Snooze Or Pause Notifications A user's context changes continuously. If you notice an unusual drop in engagement rate, or if you're anticipating an unusually high volume of notifications coming up (a birthday, wedding anniversary, or election night, perhaps), consider providing an option to mute, snooze, or pause notifications, perhaps for the next 24 hours. This might go very much against our intuition, as we might want to re-engage the customer if they've gone silent all of a sudden, or we might want to maximize their engagement when important events are happening. However, it's easy to reach a point where a seemingly harmless notification will steer a customer away, long term. Another option would be to suggest a change of medium used to consume notifications. Users tend to associate different levels of urgency with different channels of communication. In-app notifications, push notifications, and text messages are considered to be much more intrusive than good ol' email, so when frequency exceeds a certain threshold, you might want to nudge users towards a switch from push notifications to daily email summaries. Wrapping Up As always in design, timing matters, and so do timely notifications. Start slowly, and evolve your notification frequency depending on how exactly a user actually uses the product. For every type of user, set up notification profiles: frequent users, infrequent users, one-week-experience users, one-month-experience users, and so on. And whenever possible, allow your users to snooze and mute notifications for a while. Eventually, you might even want to suggest a change in the medium used to consume notifications. And when in doubt, postpone rather than send right away. Meet "Smart Interface Design Patterns" You can find more details on design patterns and UX in Smart Interface Design Patterns, our 15-hour video course with 100s of practical examples from real-life projects, with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables, with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.
CSS Intelligence: Speculating On The Future Of A Smarter Language
CSS has evolved from a purely presentational language into one with growing logical powers, thanks to features like container queries, relational pseudo-classes, and the `if()` function. Is it still just for styling, or is it becoming something more? Gabriel Shoyombo explores how smart CSS has become over the years, where it is heading, the challenges it addresses, whether it is becoming too complex, and how developers are reacting to this shift.
Once upon a time, CSS was purely presentational. It handled the fonts, colors, backgrounds, spacing, and layouts, among other styles, for markup languages. It was a language for looks, doing what it was asked to, never thinking or making decisions. At least, that was what it was made for when Håkon Wium Lie proposed CSS in 1994, and the World Wide Web Consortium (W3C) adopted it two years later. Fast-forward to today: a lot has changed with the addition of new features, and more are on the way that shift the styling language towards a more imperative paradigm. CSS now actively powers complex responsive and interactive user interfaces. With recent advancements like container queries, relational pseudo-classes, and the if() function, the language once confined to the domain of presentation has stepped into the territory of logic, reducing its reliance on the language that had handled its logical side to date, JavaScript. This shift raises interesting questions about CSS and its future. CSS has deliberately remained within the domain of styling alone for a while now, but is it time for that to change? And is CSS still the presentational language it started as, or is it becoming something more?

This article explores how smart CSS has become over the years, where it is heading, the problems it is solving, whether it is getting too complex, and how developers are reacting to this shift.

Historical Context: CSS's Intentional Simplicity

A glimpse into CSS history shows a language born to separate content from presentation, making web pages easier to manage and maintain. The first official version of CSS, CSS1, was released in 1996, and it introduced basic styling capabilities like font properties, colors, the box model (padding, margin, and border), sizes (width and height), a few simple display values (none, block, and inline), and basic selectors. Two years later, CSS2 was launched and expanded what CSS could style in HTML with features like positioning, z-index, enhanced selectors, table layouts, and media types for different devices. However, there were inconsistencies within the specification, an issue CSS2.1 resolved in 2011, becoming the standard for modern CSS. It simplified web authoring and site maintenance.

CSS was largely static and declarative during the years between CSS1 and CSS2.1. Developers experienced a mix of frustrations and breakthroughs in their projects. Due to the absence of intuitive layout systems like Flexbox and CSS Grid, developers relied on hacky alternatives such as table layouts, positioning, or floats to get around complex designs, even though floats were originally designed to let text wrap around an obstacle on a webpage, usually a media object. As a result, developers faced issues with collapsing containers and unexpected wrapping behaviour. Even so, basic styling was intuitive: a newbie could pick up web development today and add basic styling the next day. CSS was separated from content and logic, and as a result, it was highly performant and lightweight.

CSS3: The First Step Toward Context Awareness

Things changed when CSS3 rolled out. Developers had expected a single monolithic update like the previous versions, but the reality of the release did not match those expectations. The CSS3 red carpet revealed a modular system with powerful layout tools like Flexbox, CSS Grid, and media queries, defining for the first time how developers establish responsive designs.
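To ground that shift, here is a minimal sketch of the kind of responsive rule CSS3 made possible (the .gallery class name is hypothetical, not taken from the article): a grid that is a single column on narrow viewports and three columns on wider ones.

```css
/* Hypothetical .gallery: one column on small screens */
.gallery {
  display: grid;
  grid-template-columns: 1fr;
  gap: 1rem;
}

/* Three columns once the viewport is wide enough */
@media (min-width: 48em) {
  .gallery {
    grid-template-columns: repeat(3, 1fr);
  }
}
```

The stylesheet itself reacts to the viewport; no script measures the window or swaps classes.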
With over 20 modules, CSS3 marked the inception of a "smarter CSS". Flexbox's introduction around 2012 provided a flexible, one-dimensional layout system, while CSS Grid, launched in 2017, took layout a step further by offering a two-dimensional layout framework, making complex designs possible with minimal code. These advancements, as discussed by Chris Coyier, reduced reliance on hacks like floats.

It did not stop there. Media queries, a prominent part of CSS3, are one of the major contributors to this smarter CSS. With media queries, CSS can react to different devices' screens, adjusting its styles to fit the screen dimensions, aspect ratio, and orientation, a feat that earlier versions could not easily achieve. Media Queries Level 5 later added user preference features such as prefers-color-scheme and prefers-reduced-motion, making CSS more user-centric by adapting styles to user settings and enhancing accessibility.

CSS3 marked the beginning of a context-aware CSS. Context-awareness means the ability to understand and react to the situation around you or in your environment. It means systems and devices can sense critical information, like your location, time of day, and activity, and adjust accordingly. In web development, the term "context-awareness" has always been used with components, but what drives a context-aware component? If you mentioned anything other than the component's styles, you would be wrong! For a component to be considered context-aware, it needs to feel its environment's presence and know what happens in it. For instance, for your website to update its styles to accommodate a dark mode interface, it needs to be aware of the user's preferences. Also, to change its layout, a website needs to know the device a user is accessing it on, and thanks to user preference media queries, that is possible.

Despite these features, CSS remained largely reactive. It responded to external factors like screen size (via media queries) or input states (like :hover, :focus, or :checked), but it never made decisions based on changes in its environment. Developers typically turned to JavaScript for that level of interaction. Not anymore. With container queries and, more recently, container style queries, CSS now responds not only to layout constraints but to design intent. It can adjust based on a component's environment and even its parent's theme or state. And that's not all. The recently specced if() function promises inline conditional logic, allowing styles to change based on conditions, all without scripting. These developments suggest CSS is moving beyond presentation to handle behaviour, challenging its traditional role.

New CSS Features Driving Intelligence

Several features are currently pushing CSS towards a more dynamic and adaptive edge, thereby making it smarter, but two are worth singling out: container style queries and the if() function.

What Are Container Style Queries, And Why Do They Matter?

To better understand what container style queries are, it makes sense to make a quick stop at a close cousin: container size queries, introduced in the CSS Containment Module Level 3. Container size queries allow developers to style elements based on the dimensions of their parent container. This is a huge win for component-based designs, as it eliminates the need to shoehorn responsive styles into global media queries.
```css
/* Size-based container query */
@container (min-width: 500px) {
  .card {
    flex-direction: row;
  }
}
```

Container style queries take it a step further by allowing you to style elements based on custom properties (aka CSS variables) set on the container.

```css
/* Style-based container query */
@container style(--theme: dark) {
  .button {
    background: black;
    color: white;
  }
}
```

These features are a big deal in CSS because they unlock context-aware components. A button can change appearance based on a --theme property set by a parent, without JavaScript or hardcoded classes.

The if() Function: A Glimpse Into The Future

The CSS if() function might just be the most radical shift yet. Once widely implemented, it will allow developers to write inline conditional logic directly in property declarations. Think of it as the ternary operator of CSS.

```css
color: if(style(--theme: dark): white; else: black);
```

This line, using the syntax currently being specced, sets the text color to white if the --theme custom property equals dark, or black otherwise. Browser support is still limited to Chrome (as of version 137), but the feature is on the radar of the CSS Working Group, and influential developers like Lea Verou are already exploring its possibilities.

The New CSS: Is The Boundary Between CSS And JavaScript Blurring?

Traditionally, the separation of concerns around styling was this: CSS for how things look, JavaScript for how things behave. However, features like container style queries and the specced if() function are starting to blur the line. CSS is beginning to behave, not in the sense of API calls or event listeners, but in its ability to conditionally apply styles based on logic or context.

As web development evolved, CSS started encroaching on JavaScript territory. CSS3 brought in animations and transitions, a powerful combination for interactive web development that was impossible without JavaScript in the earlier days. Today, CSS has taken on several interactive tasks previously handled by JavaScript. For example, the :hover pseudo-class and the transition property allow for visual feedback and smooth animations, as discussed in "Bringing Interactivity To Your Website With Web Standards". That's not all. Toggling accordions and modals used to live within the domain of JavaScript, but today this is possible with powerful combinations like the <details> and <summary> HTML elements for accordions, or modals driven by the :target pseudo-class. CSS can also handle tooltips using aria-label with content: attr(aria-label), and star ratings with radio inputs and labels, as detailed in the same article. Another article, "5 things you can do with CSS instead of JavaScript", lists features like scroll-behavior: smooth for smooth scrolling and @media (prefers-color-scheme: dark) for dark mode, tasks that once required JavaScript. The same article also shows that it's possible to create a carousel without JavaScript by using CSS scroll snapping (and we're not even talking about the features designed specifically for creating carousels solely in CSS, recently prototyped in Chrome). These extensions of CSS into the JavaScript domain have left the latter to handle only the complex, crucial interactions in a web application, such as user input, API calls, and state management.
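To make the "no JavaScript required" point concrete, here is a minimal sketch of the scroll-snapping technique mentioned above (the .carousel and .slide class names are hypothetical):

```css
/* A horizontally scrolling strip whose children snap into place */
.carousel {
  display: flex;
  overflow-x: auto;
  scroll-snap-type: x mandatory;
  scroll-behavior: smooth; /* smooth scrolling, no script needed */
}

.carousel .slide {
  flex: 0 0 100%;           /* one slide per width of the strip */
  scroll-snap-align: start; /* each slide snaps to the container edge */
}
```

Paired with anchor links that target individual slides, this gives basic carousel behaviour entirely in CSS; JavaScript only becomes necessary for autoplay or more elaborate controls.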
While CSS pseudo-classes like :valid and :invalid can help as error or success indicators on input elements, you still need JavaScript for dynamic content updates, form validation, and real-time data fetching. CSS now solves problems that many developers never knew existed. With JavaScript out of the way in many styling scenarios, developers now have simplified codebases. The dependencies are fewer, the overheads are lower, and website performance is better, especially on mobile devices. In fact, this shift leans CSS towards a more accessible web, as CSS-driven designs are often easier for browsers and assistive technologies to process.

While the new features come with a lot of benefits, they also introduce complexities that did not exist before: What happens when logic is spread across both CSS and JavaScript? How do we debug conditional styles without a clear view of what triggered them? CSS once only had to deal with basic styling like colors, fonts, layouts, and spacing, which made it easy for new developers to get started. How steep does the learning curve become when these new features require understanding concepts once exclusive to JavaScript?

Developers are split. Some welcome this as the natural evolution of a smarter, more component-aware web, while others worry CSS is becoming too complex: a language originally designed for formatting documents now juggling logic trees and style computation.

Divided Perspective: Is Logic In CSS Helpful Or Harmful?

While the evidence in the previous section leans towards boundary-blurring, there's significant controversy among developers. Many modern developers argue that logic in CSS is long overdue. As web development grows more componentized, the limitations of purely declarative styling have become more apparent, and proponents see logic as a necessary evolution for a once purely presentational language. For instance, in frontend libraries like React, components often require conditional styles based on props or state. Developers have had to make do with JavaScript or CSS-in-JS solutions for such cases, but these workarounds are not ideal: they introduce complexity and couple styles to logic. CSS and JavaScript are meant to have separate concerns in web development, but CSS-in-JS libraries have ignored that rule and combined both. Preprocessors like Sass and LESS have long proved the usefulness of conditionals, loops, and variables in styling, and developers who do not accept the CSS-in-JS approach have settled for them. Nevertheless, like Adam Argyle, they voice their need for native CSS solutions. With native conditionals, developers could reduce JavaScript overhead and avoid runtime class toggling to achieve conditional presentation.

"It never felt right to me to manipulate style settings in JavaScript when CSS is the right tool for the job. With CSS custom properties, we can send to CSS what needs to come from JavaScript."
– Chris Heilmann

Also, Bob Ziroll dislikes using JavaScript for what CSS is meant to handle and finds it unnecessary. This reflects a preference for using CSS for styling tasks, even when JavaScript is involved. These developers embrace CSS's new capabilities, seeing them as a way to reduce JavaScript dependency for performance reasons.

Others argue against it: introducing logic into CSS is a slippery slope, and CSS could lose its core strengths of simplicity, readability, and accessibility by becoming too much like a programming language.
The fear is that developers run the risk of complicating the web more than it needs to be.

"I'm old-fashioned. I like my CSS separated from my HTML; my HTML separated from my JS; my JS separated from my CSS."
– Sara Soueidan

This view emphasises the traditional separation of concerns, arguing that mixing roles can complicate maintenance. Additionally, Brad Frost has expressed skepticism when talking specifically about CSS-in-JS, stating that it "doesn't scale to non-JS-framework environments, adds more noise to an already-noisy JS file, and the demos/examples I have seen haven't embodied CSS best practices." This highlights concerns about scalability and best practices, suggesting that the blurred boundary might not always be beneficial. Community discussions, such as on Stack Overflow, also reflect this divide. A question like "Is it always better to use CSS when possible instead of JS?" receives answers favouring CSS for performance and simplicity, while others argue JavaScript is necessary for complex scenarios, illustrating the ongoing debate. Don't be fooled: it might seem convenient to agree that CSS performs better than JavaScript for styling, but that's not always the case.

A Smarter CSS Without Losing Its Soul

CSS has always stood apart from full-blown programming languages like JavaScript by being declarative, accessible, and purpose-driven. If CSS is to grow more intelligent, the challenge lies not in making it more powerful for its own sake but in evolving it without compromising its core concern. So, what might a logically enriched but still declarative CSS look like? Let's find out.

Conditional Rules (if, @when…@else) With Carefully Introduced Logic

A major frontier in CSS's evolution is the introduction of native conditionals via the if() function and the @when…@else at-rules, which are part of the CSS Conditional Rules Module Level 5 specification. While still in the early draft stages, these would allow developers to apply styles based on evaluated conditions without turning to JavaScript or a preprocessor. Unlike JavaScript's imperative approach, these conditionals aim to keep logic embedded in CSS's existing flow, aligned with the cascade and specificity.

More Powerful, Intentional Selectors

Selectors have always been one of the major strengths of CSS, and expanding them in a targeted way makes it easier to express relationships and conditions declaratively without extra classes or scripts. Currently, :has() lets developers style a parent based on a child, and :nth-child(An+B [of S]?) (in Selectors Level 4) allows for more complex matching patterns. Together, they allow greater precision without altering CSS's nature.

Scoped Styling Without JavaScript

One of the challenges developers face in component-based frameworks like React or Vue is style scoping. Style scoping ensures styles apply only to specific elements or components and do not leak out. In the past, achieving this meant BEM naming conventions, CSS-in-JS, or build tools like CSS Modules. Native scoped styling in CSS, via the new experimental @scope rule, allows developers to encapsulate styles in a specific context without extra tooling. This feature makes CSS more modular without tying it to JavaScript logic or complex class systems.

A fundamental design question now is whether we can empower CSS without making it like JavaScript.
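Features like :has() and the experimental @scope rule hint at what the answer might look like in practice. As a rough, illustrative sketch (the .field and .card class names are hypothetical), here is a form field that flags its own label when its input is invalid, and a card component whose styles cannot leak out:

```css
/* Style the whole field, not just the input, when validation fails */
.field:has(input:invalid) {
  border-color: crimson;
}

.field:has(input:invalid) > label {
  color: crimson;
}

/* Scope card styles so they apply only inside .card, without BEM or build tools */
@scope (.card) {
  :scope {
    padding: 1rem;
  }

  img {
    border-radius: 0.5rem;
  }
}
```

Both patterns stay declarative: the browser evaluates the conditions, and no classes are toggled and no script runs.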
The truth is, to empower CSS with conditional logic, powerful selectors, and scoped rules, we don't need it to mirror JavaScript's syntax or complexity. The goal is declarative expressiveness: giving CSS more awareness and control while retaining its clear, readable nature, and that is what we should focus on. When done right, smarter CSS can amplify the language's strengths rather than dilute them. The real danger is not logic itself but unchecked complexity that obscures the simplicity on which CSS was built.

Cautions And Constraints: Why Smart Isn't Always Better

The push for a smarter CSS comes with significant trade-offs alongside the extra control and flexibility. History has shown that adding a new feature to a language, framework, or library usually introduces complexity, not just for newcomers but also for experienced developers. The danger is not in CSS gaining power but in how that power is implemented, taught, and used. One of CSS's greatest strengths has always been its approachability. Designers and beginners could learn the basics quickly: selectors, properties, and values. With more logic, scoping, and advanced selectors being introduced, that learning curve steepens. The risk is a widening gap between "basic CSS" and "real-world CSS", echoing what happened with JavaScript and its ecosystem.

As CSS becomes more powerful, developers increasingly lean on tooling to manage and abstract that power: build systems (e.g., webpack, Vite), linters and formatters, and component libraries with strict styling conventions. This creates dependencies that are hard to escape. Tooling becomes a prerequisite, not an option, further complicating onboarding and increasing setup time for projects that used to work with a single stylesheet. Also, more logic means more potential for unexpected outcomes. New issues might arise that are harder to spot and fix. Tools like browser DevTools will need to evolve to visualise scope boundaries, conditional rule application, and complex selector chains. Until then, debugging may remain a challenge. All of these are challenges already experienced with CSS-in-JS; how much more so with native CSS?

We've seen this before. CSS history is filled with overcomplicated workarounds: tables for layout before Flexbox, floats with clearfix hacks, and overly rigid grid systems before native CSS Grid. In each case, the hacky solution eventually became the problem. CSS got better not by mimicking other languages but by standardising thoughtful, declarative solutions. With the right balance of power, we can make CSS better in the end.

Conclusion

We just took a walk through the history of CSS, explored its present, and peeked into what its future could be. We can all agree that CSS has come a long way, from a simple, declarative language to a dynamic, context-aware and, yes, smarter one. The evolution, of course, comes with tension: a smarter styling language with fewer dependencies on scripts is also a more complex one with a steeper learning curve. This is my conclusion: the future of CSS shouldn't be a race to add logic for its own sake. Instead, it should be a thoughtful expansion, power balanced by clarity and innovation grounded in accessibility. That means asking tough questions before shipping new features. It means ensuring that new capabilities help solve actual problems without introducing new barriers.
Turning User Research Into Real Organizational Change
Bridging the gap between user research insights and actual organizational action, with a clear roadmap for impact.
This article is sponsored by Lyssna.

We've all been there: you pour your heart and soul into conducting meticulous user research. You gather insightful data, create detailed reports, and confidently deliver your findings. Yet, months later, little has changed. Your research sits idle on someone's desk, gathering digital dust. It feels frustrating, like carefully preparing a fantastic meal, only to have it left uneaten. There are many useful tools (like Lyssna) to help us run incredible user research, and plenty of articles about how to get the most from them. However, there's much less guidance about ensuring our user research gets adopted and brings about real change. So, in this post, I want to answer a simple question: How can you make sure your user research truly transforms your organization?

Introduction

User research is only as valuable as the impact it has. When research insights fail to make their way into decisions, teams miss out on opportunities to improve products, experiences, and, ultimately, business results. In this post, we'll look at:

- Why research often fails to influence organizational change;
- How to ensure strategic alignment so research matters from day one;
- Ways to communicate insights clearly so stakeholders stay engaged;
- How to overcome practical implementation barriers;
- Strategies for realigning policies and culture to support research-driven changes.

By covering each of these areas, you'll have a clear roadmap for turning your hard-won research into genuine action.

Typical Reasons For Failure

If you've ever felt your research get stuck, it probably came down to one (or more) of these issues.

Strategic Misalignment

When findings aren't tied to business objectives or ROI, they struggle to gain traction. Sharing a particular hurdle that users face will fall on deaf ears if stakeholders cannot see how that problem will impact their bottom line. Research arriving too late is another hurdle. If you share insights after key decisions are made, stakeholders assume your input won't change anything. Finally, research often competes with other priorities. Teams might have limited resources and focus on urgent deadlines rather than long-term user improvements.

Communication Issues

Even brilliant research can get lost in translation if it's buried in dense reports. I've seen stakeholders glaze over when handed 30-page documents full of jargon. When key takeaways aren't crystal clear, decision-makers can't quickly act on your findings. Organizational silos can make communication worse. Marketing might have valuable insights that product managers never see, or designers may share findings that customer support doesn't know how to use. Without a way to bridge those gaps, research lives in a vacuum.

Implementation Challenges

Great insights require a champion. Without a clear owner, research often lives with the person who ran it, and no one else feels responsible. Stakeholder skepticism also plays a role. Some teams doubt the methods or worry the findings don't apply to real customers. Even if there is momentum, insufficient follow-up or progress tracking can stall things. I've heard teams say, "We started down that path but ran out of time." Without regular check-ins, good ideas fade away.

Policy And Cultural Barriers

Legal, compliance, or tech constraints can limit what you propose. I once suggested a redesign to comply with new accessibility standards, but the existing technical stack couldn't support it. Resistance due to established culture is also common.
If a companyâs used to launching fast and iterating later, they might see research-driven change as slowing them down. Now that we understand what stands in the way of effective research implementation, letâs explore practical solutions to overcome these challenges and drive real organizational change. Ensuring Strategic Alignment When research ties directly to business goals, it becomes impossible to ignore. Hereâs how to do it. Early Stakeholder Engagement Invite key decision-makers into the research planning phase. I like to host a kickoff session where we map research objectives to specific KPIs, like increasing conversions by 10% or reducing support tickets by 20%. When your stakeholders help shape those objectives, theyâre more invested in the results. Research Objectives Aligned With Business KPIs While UX designers often focus on user metrics like satisfaction scores or task completion rates, itâs crucial to connect our research to business outcomes that matter to stakeholders. Start by identifying the key business metrics that will demonstrate the value of your research: Identify which metrics matter most to the organization (e.g., conversion rate, churn, average order value). Frame research questions to directly address those metrics. Make preliminary hypotheses about how insights may affect the bottom line. Develop Stakeholder-Specific Value Propositions When presenting user research to groups, itâs easy to fall into the trap of delivering a one-size-fits-all message that fails to truly resonate with anyone. Instead, we need to carefully consider how different stakeholders will receive and act on our findings. The real power of user research emerges when we can connect our insights directly to what matters most for each specific audience: For the product team: Show how insights can reduce development time by eliminating guesswork. For marketing: Demonstrate how understanding user language can boost ad copy effectiveness. For executives: Highlight potential cost savings or revenue gains. ROI Framework Development Stakeholders want to see real numbers. Develop simple templates to estimate potential cost savings or revenue gains. For example, if you uncover a usability issue thatâs causing a 5% drop-off in the signup flow, translate that into lost revenue per month. I also recommend documenting success stories from similar projects within your own organization or from case studies. When a stakeholder sees that another company boosted revenue by 15% after addressing a UX flaw, theyâre more likely to pay attention. Research Pipeline Integration Integrate research tasks directly into your product roadmap. Schedule user interviews or usability tests just before major feature sprints. That way, findings land at the right moment â when teams are making critical decisions. Regular Touchpoints with Strategic Teams Itâs essential to maintain consistent communication with strategic teams through regular research review meetings. These sessions provide a dedicated space to discuss new insights and findings. To keep everyone aligned, stakeholders should have access to a shared calendar that clearly marks key research milestones. Using collaborative tools like Trello boards or shared calendars ensures the entire team stays informed about the research plan and progress. Resource Optimization Research doesnât have to be a massive, months-long effort each time. Build modular research plans that can scale. If you need quick, early feedback, run a five-user usability test rather than a full survey. 
For deeper analysis, you can add more participants later.

Addressing Communication Issues

Making research understandable is almost as important as the research itself. Let's explore how to share insights so they stick.

Create Research One-Pagers

Condense key findings into a scannable one-pager, no more than a single sheet. Start with a brief summary of the problem, then highlight three to five top takeaways. Use bold headings and visual elements (charts, icons) to draw attention.

Implement Progressive Disclosure

Avoid dumping all the details at once. Start with a high-level executive summary that anyone can read in 30 seconds. Then link to a more detailed section for folks who want the full methodology or raw data. This layered approach helps different stakeholders absorb information at their own pace.

Use Visual Storytelling

Humans are wired to respond to stories. Transform data into a narrative by using journey maps, before/after scenarios, and user stories. For example, illustrate how a user feels at each step of a signup process, then show how proposed changes could improve their experience.

Regular Stakeholder Updates

Keep the conversation going. Schedule brief weekly or biweekly "research highlights" emails or meetings. These should be no more than five minutes and focus on one or two new insights. When stakeholders hear snippets of progress regularly, research stays top of mind.

Interactive Presentations

Take research readouts beyond slide decks. Host workshop-style sessions where stakeholders engage with findings hands-on. For instance, break them into small groups to discuss a specific persona and brainstorm solutions. When people physically interact with research (sticky notes, printed journey maps), they internalize it better.

Overcome Implementation Challenges

Now that stakeholders understand and value your research, let's make sure they turn insights into action.

Establish Clear Ownership

Assign a dedicated owner for each major recommendation. Use a RACI matrix to clarify who is Responsible, Accountable, Consulted, and Informed. I like to share a simple table listing each initiative, the person driving it, and key milestones. When everyone knows who's accountable, progress is more likely.

RACI Matrix Example

| Initiative | Responsible | Accountable | Consulted | Informed |
| --- | --- | --- | --- | --- |
| Redesign Signup Flow | UX Lead | Product Manager | Engineering, Legal | Marketing, Support |
| Create One-Pager Templates | UX Researcher | Design Director | Stakeholder Team | All Departments |

Build Implementation Roadmaps

Break recommendations down into phases. For example: Phase 1, quick usability tweaks (1–2 weeks); Phase 2, prototype the new design (3–4 weeks); Phase 3, launch an A/B test (2–3 weeks). Each phase needs clear timelines, success metrics, and resources identified upfront.

Address Stakeholder Skepticism

Be transparent about your methods. Share your recruitment screeners, interview scripts, and a summary of analysis steps. Offer validation sessions where stakeholders can ask questions about how the data was collected and interpreted. When they understand the process, they trust the findings more.

Create Support Systems

Even when stakeholders agree, they need help executing. Establish mentorship or buddy programs where experienced researchers or designers guide implementation. Develop training materials, like short "how-to" guides on running usability tests or interpreting survey data. Set up feedback channels (Slack channels, shared docs) where teams can ask questions or share roadblocks.
Monitor And Track Progress Establish regular progress reviews weekly or biweekly. Use dashboards to track metrics such as A/B test performance, error rates, or user satisfaction scores. Even a more complicated dashboard can be built using no-code tools and AI, so you no longer need to rely on developer support. Realign Policies and Culture Even the best strategic plans and communication tactics can stumble if policies and culture arenât supportive. Hereâs how to address systemic barriers. Create a Policy Evolution Framework First, audit existing policies for anything that blocks research-driven changes. Maybe your data security policy requires months of legal review before you can recruit participants. Document those barriers and work with legal or compliance teams to create flexible guidelines. Develop a process for policy exception requests â so if you need a faster path for a small study, you know how to get approval without massive delays. Technical Infrastructure Adaptation Technology can be a silent killer of good ideas. Before proposing changes, work with IT to understand current limitations. Document technical requirements clearly so teams know whatâs feasible. Propose a phased approach to any necessary infrastructure updates. Start with small changes that have an immediate impact, then plan for larger upgrades over time. Build Cultural Buy-In Culture shift doesnât happen overnight. Share quick wins and success stories from early adopters in your organization. Recognize and reward change pioneers. Send a team-wide shout-out when someone successfully implements a research-driven improvement. Create a champions network across departments, so each area has at least one advocate who can spread best practices and encourage others. Develop a Change Management Strategy Change management is about clear, consistent communication. Develop tailored communication plans for different stakeholder groups. For example, executives might get a one-page impact summary, while developers get technical documentation and staging environments to test new designs. Establish feedback channels so teams can voice concerns or suggestions. Finally, provide change management training for team leaders so they can guide their direct reports through transitions. Measure Cultural Impact Culture can be hard to quantify, but simple pulse surveys go a long way. Ask employees how they feel about recent changes and whether they are more confident using data to make decisions. Track employee engagement metrics like survey participation or forum activity in research channels. Monitor resistance patterns (e.g., repeated delays or rejections) and address the root causes proactively. Conclusions Transforming user research into organizational change requires a holistic approach. Hereâs what matters most: Strategic Alignment: Involve stakeholders early, tie research to KPIs, and integrate research into decision cycles. Effective Communication: Use one-pagers, progressive disclosure, visual storytelling, regular updates, and interactive presentations to keep research alive. Implementation Frameworks: Assign clear ownership, build phased roadmaps, address skepticism, offer support systems, and track progress. Culture and Policy: Audit and update policies, adapt infrastructure gradually, foster cultural buy-in, and employ change management techniques. When you bring all of these elements together, research stops being an isolated exercise and becomes a driving force for real, measurable improvements. 
Keep in mind:

- Early stakeholder engagement drives buy-in.
- Clear research-to-ROI frameworks get attention.
- Ongoing, digestible communication keeps momentum.
- Dedicated ownership and phased roadmaps prevent stalls.
- Policy flexibility and cultural support enable lasting change.

This is an iterative, ongoing process. Each success builds trust and opens doors for more ambitious research efforts. Be patient, stay persistent, and keep adapting. When your organization sees research as a core driver of decisions, you'll know you've truly succeeded.
Design Dialects: Breaking the Rules, Not the System
"Language is not merely a set of unrelated sounds, clauses, rules, and meanings; it is a totally coherent system bound to context and behavior." â Kenneth L. Pike
The web has accents. So should our design systems.
Design Systems as Living Languages

Design systems aren't component libraries; they're living languages. Tokens are phonemes, components are words, patterns are phrases, layouts are sentences. The conversations we build with users become the stories our products tell.

But here's what we've forgotten: the more fluently a language is spoken, the more accents it can support without losing meaning. English in Scotland differs from English in Sydney, yet both are unmistakably English. The language adapts to context while preserving core meaning. This couldn't be more obvious to me, a Brazilian Portuguese speaker who learned English with an American accent and lives in Sydney.
Our design systems must work the same way. Rigid adherence to visual rules creates brittle systems that break under contextual pressure. Fluent systems bend without breaking.
Consistency becomes a prison
The promise of design systems was simple: consistent components would accelerate development and unify experiences. But as systems matured and products grew more complex, that promise has become a prison. Teams file "exception" requests by the hundreds. Products launch with workarounds instead of system components. Designers spend more time defending consistency than solving user problems.
Our design systems must learn to speak dialects.
A design dialect is a systematic adaptation of a design system that maintains core principles while developing new patterns for specific contexts. Unlike one-off customizations or brand themes, dialects preserve the system's essential grammar while expanding its vocabulary to serve different users, environments, or constraints.

When Perfect Consistency Fails

At Booking.com, I learned this lesson the hard way. We A/B-tested everything: color, copy, button shapes, even logo colors. As a professional with a graphic design education and experience building brand style guides, I found this shocking. While everyone fell in love with Airbnb's pristine design system, Booking grew into a giant without ever considering visual consistency.

The chaos taught me something profound: consistency isn't ROI; solved problems are.

At Shopify, Polaris (https://polaris-react.shopify.com/) was our crown jewel: a mature design language perfect for merchants on laptops. As a product team, we were expected to adopt Polaris as-is. Then my fulfillment team hit an "Oh, Ship!" moment, as we faced the challenge of building an app for warehouse pickers who use our interface on shared, battered Android scanners in dim aisles, wearing thick gloves, scanning dozens of items per minute, many with a limited understanding of English.
Task completion with standard Polaris: 0%.
Every component that worked beautifully for merchants failed completely for pickers. White backgrounds created glare. 44px tap targets were too small for gloved fingers. Sentence-case labels took too long to parse. Multi-step flows confused non-native speakers.
We faced a choice: abandon Polaris entirely, or teach it to speak warehouse.
The Birth of a Dialect

We chose evolution over revolution. Working within Polaris's core principles of clarity, efficiency, and consistency, we developed what we now call a design dialect:

| Constraint | Fluent Move | Rationale |
| --- | --- | --- |
| Glare & low light | Dark surfaces + light text | Reduce glare on low-DPI screens |
| Gloves & haste | 90px tap targets (~2cm) | Accommodate thick gloves |
| Multilingual | Single-task screens, plain language | Reduce cognitive load |

Result: Task completion jumped from 0% to 100%. Onboarding time dropped from three weeks to one shift.

This wasn't customization or theming; this was a dialect: a systematic adaptation that maintained Polaris's core grammar while developing new vocabulary for a specific context. Polaris hadn't failed; it had learned to speak warehouse.
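For a sense of how small the code footprint of such a dialect can be, here is a hedged sketch, not actual Polaris code, of how the warehouse constraints above might be expressed as token overrides (the custom property names and selectors are hypothetical):

```css
/* Hypothetical "warehouse" dialect: override base tokens, keep the grammar */
.warehouse-dialect {
  --surface: #111;          /* dark surfaces to cut glare */
  --on-surface: #f5f5f5;    /* light text for low-DPI screens */
  --tap-target-min: 90px;   /* roughly 2cm, workable with thick gloves */
}

.warehouse-dialect button {
  min-width: var(--tap-target-min);
  min-height: var(--tap-target-min);
  background: var(--surface);
  color: var(--on-surface);
}
```

The point is not these particular values but that the dialect lives as a systematic layer over the base system rather than a fork of it.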
The Flexibility Framework

At Atlassian, working on the Jira platform (itself a system within the larger Atlassian system), I pushed to formalize this insight. With dozens of products sharing a design language across different codebases, we needed systematic flexibility, so we built it directly into our ways of working. The old model of exception requests and special approvals was failing at scale.
We developed the Flexibility Framework to help designers define how flexible they wanted their components to be:
| Tier | Action | Ownership |
| --- | --- | --- |
| Consistent | Adopt unchanged | Platform locks design + code |
| Opinionated | Adapt within bounds | Platform provides smart defaults, products customize |
| Flexible | Extend freely | Platform defines behavior, products own presentation |

During a navigation redesign, we tiered every element. Logo and global search stayed Consistent. Breadcrumbs and contextual actions became Flexible. Product teams could immediately see where innovation was welcome and where consistency mattered.

The Decision Ladder

Flexibility needs boundaries. We created a simple ladder for evaluating when rules should bend:
Good: Ship with existing system components. Fast, consistent, proven.
Better: Stretch a component slightly. Document the change. Contribute improvements back to the system for all to use.
Best: Prototype the ideal experience first. If user testing validates the benefit, update the system to support it.
The key question: "Which option lets users succeed fastest?"
Rules are tools, not relics.
Unity Beats Uniformity

Gmail, Drive, and Maps are unmistakably Google, yet each speaks with its own accent. They achieve unity through shared principles, not cloned components. One extra week of debate over button color costs roughly $30K in engineer time.
Unity is a brand outcome; fluency is a user outcome. When the two clash, side with the user.
Governance Without Gates

How do you maintain coherence while enabling dialects? Treat your system like a living vocabulary:

- Document every deviation, e.g., dialects/warehouse.md with before/after screenshots and rationale.
- Promote shared patterns: when three teams adopt a dialect independently, review it for core inclusion.
- Deprecate with context: retire old idioms via flags and migration notes, never a big-bang purge.
A living dictionary scales better than a frozen rulebook.
Start Small: Your First Dialect

Ready to introduce dialects? Start with one broken experience:

This week: Find one user flow where perfect consistency blocks task completion. It could be mobile users struggling with desktop-sized components, or accessibility needs your standard patterns don't address.
Document the context: What makes standard patterns fail here? Environmental constraints? User capabilities? Task urgency?
Design one systematic change: Focus on behavior over aesthetics. If gloves are the problem, bigger targets aren't "breaking the system"; they're serving the user. Earn the variations and make them intentional.
Test and measure: Does the change improve task completion? Time to productivity? User satisfaction?
Show the savings: If that dialect frees even half a sprint, fluency has paid for itself.
Beyond the Component Library

We're not managing design systems anymore; we're cultivating design languages. Languages that grow with their speakers. Languages that develop accents without losing meaning. Languages that serve human needs over aesthetic ideals.

The warehouse workers who went from 0% to 100% task completion didn't care that our buttons broke the style guide. They cared that the buttons finally worked.
Your users feel the same way. Give your system permission to speak their language.
A Holistic Framework for Shared Design Leadership
Picture this: you're in a meeting room at your tech company, and two people are having what looks like the same conversation about the same design problem. One is talking about whether the team has the right skills to tackle it. The other is diving deep into whether the solution actually solves the user's problem. Same room, same problem, completely different lenses.

This is the beautiful, sometimes messy reality of having both a Design Manager and a Lead Designer on the same team. And if you're wondering how to make this work without creating confusion, overlap, or the dreaded "too many cooks" scenario, you're asking the right question.
The traditional answer has been to draw clean lines on an org chart. The Design Manager handles people, the Lead Designer handles craft. Problem solved, right? Except clean org charts are fantasy. In reality, both roles care deeply about team health, design quality, and shipping great work.
The magic happens when you embrace the overlap instead of fighting it: when you start thinking of your design org as a design organism.

The Anatomy of a Healthy Design Team

Here's what I've learned from years of being on both sides of this equation: think of your design team as a living organism. The Design Manager tends to the mind (the psychological safety, the career growth, the team dynamics). The Lead Designer tends to the body (the craft skills, the design standards, the hands-on work that ships to users).

But just as mind and body aren't completely separate systems, these roles overlap in important ways. You can't have a healthy person without both working in harmony. The trick is knowing where those overlaps are and how to navigate them gracefully.
When we look at how healthy teams actually function, three critical systems emerge. Each requires both roles to work together, but with one taking primary responsibility for keeping that system strong.
The Nervous System: People & Psychology

Primary caretaker: Design Manager
Supporting role: Lead Designer
The nervous system is all about signals, feedback, and psychological safety. When this system is healthy, information flows freely, people feel safe to take risks, and the team can adapt quickly to new challenges.
The Design Manager is the primary caretaker here. They're monitoring the team's psychological pulse, ensuring feedback loops are healthy, and creating the conditions for people to grow. They're hosting career conversations, managing workload, and making sure no one burns out.

But the Lead Designer plays a crucial supporting role. They're providing sensory input about craft development needs, spotting when someone's design skills are stagnating, and helping identify growth opportunities that the Design Manager might miss.
Design Manager tends to:
- Career conversations and growth planning
- Team psychological safety and dynamics
- Workload management and resource allocation
- Performance reviews and feedback systems
- Creating learning opportunities
Lead Designer supports by:
- Providing craft-specific feedback on team member development
- Identifying design skill gaps and growth opportunities
- Offering design mentorship and guidance
- Signaling when team members are ready for more complex challenges
The Muscular System

Primary caretaker: Lead Designer
Supporting role: Design Manager
The muscular system is about strength, coordination, and skill development. When this system is healthy, the team can execute complex design work with precision, maintain consistent quality, and adapt their craft to new challenges.
The Lead Designer is the primary caretaker here. They're setting design standards, providing craft coaching, and ensuring that shipping work meets the quality bar. They're the ones who can tell you if a design decision is sound or if we're solving the right problem.

But the Design Manager plays a crucial supporting role. They're ensuring the team has the resources and support to do their best craft work, like proper nutrition and recovery time for an athlete.
Lead Designer tends to:
- Definition of design standards and system usage
- Feedback on what design work meets the standard
- Experience direction for the product
- Design decisions and product-wide alignment
- Innovation and craft advancement
Design Manager supports by:
- Ensuring design standards are understood and adopted across the team
- Confirming experience direction is being followed
- Supporting practices and systems that scale without bottlenecking
- Facilitating design alignment across teams
- Providing resources and removing obstacles to great craft work
The Circulatory System

Shared caretakers: Both Design Manager and Lead Designer
The circulatory system is about how information, decisions, and energy flow through the team. When this system is healthy, strategic direction is clear, priorities are aligned, and the team can respond quickly to new opportunities or challenges.
This is where true partnership happens. Both roles are responsible for keeping the circulation strong, but they're bringing different perspectives to the table.
Lead Designer contributes:
- User needs are met by the product
- Overall product quality and experience
- Strategic design initiatives
- Research-based user needs for each initiative
Design Manager contributes:
- Communication to team and stakeholders
- Stakeholder management and alignment
- Cross-functional team accountability
- Strategic business initiatives
Both collaborate on:
- Co-creation of strategy with leadership
- Team goals and prioritization approach
- Organizational structure decisions
- Success measures and frameworks
The key to making this partnership sing is understanding that all three systems need to work together. A team with great craft skills but poor psychological safety will burn out. A team with great culture but weak craft execution will ship mediocre work. A team with both but poor strategic circulation will work hard on the wrong things.
Be Explicit About Which System You're Tending

When you're in a meeting about a design problem, it helps to acknowledge which system you're primarily focused on. "I'm thinking about this from a team capacity perspective" (nervous system) or "I'm looking at this through the lens of user needs" (muscular system) gives everyone context for your input.

This isn't about staying in your lane. It's about being transparent about which lens you're using, so the other person knows how best to add their perspective.

Create Healthy Feedback Loops

The most successful partnerships I've seen establish clear feedback loops between the systems:

Nervous system signals to muscular system: "The team is struggling with confidence in their design skills" → the Lead Designer provides more craft coaching and clearer standards.

Muscular system signals to nervous system: "The team's craft skills are advancing faster than their project complexity" → the Design Manager finds more challenging growth opportunities.

Both systems signal to the circulatory system: "We're seeing patterns in team health and craft development that suggest we need to adjust our strategic priorities."

Handle Handoffs Gracefully

The most critical moments in this partnership are when something moves from one system to another. This might be when a design standard (muscular system) needs to be rolled out across the team (nervous system), or when a strategic initiative (circulatory system) needs specific craft execution (muscular system).

Make these transitions explicit. "I've defined the new component standards. Can you help me think through how to get the team up to speed?" or "We've agreed on this strategic direction. I'm going to focus on the specific user experience approach from here."

Stay Curious, Not Territorial

The Design Manager who never thinks about craft, or the Lead Designer who never considers team dynamics, is like a doctor who only looks at one body system. Great design leadership requires both people to care about the whole organism, even when they're not the primary caretaker.

This means asking questions rather than making assumptions. "What do you think about the team's craft development in this area?" or "How do you see this impacting team morale and workload?" keeps both perspectives active in every decision.

When the Organism Gets Sick

Even with clear roles, this partnership can go sideways. Here are the most common failure modes I've seen:

System Isolation

The Design Manager focuses only on the nervous system and ignores craft development. The Lead Designer focuses only on the muscular system and ignores team dynamics. Both people retreat to their comfort zones and stop collaborating.
The symptoms: Team members get mixed messages, work quality suffers, morale drops.
The treatment: Reconnect around shared outcomes. What are you both trying to achieve? Usually it's great design work that ships on time from a healthy team. Figure out how both systems serve that goal.
Poor Circulation

Strategic direction is unclear, priorities keep shifting, and neither role is taking responsibility for keeping information flowing.
The symptoms: Team members are confused about priorities, work gets duplicated or dropped, deadlines are missed.
The treatment: Explicitly assign responsibility for circulation. Who's communicating what to whom? How often? What's the feedback loop?

Autoimmune Response

One person feels threatened by the other's expertise. The Design Manager thinks the Lead Designer is undermining their authority. The Lead Designer thinks the Design Manager doesn't understand craft.
The symptoms: Defensive behavior, territorial disputes, team members caught in the middle.
The treatment: Remember that you're both caretakers of the same organism. When one system fails, the whole team suffers. When both systems are healthy, the team thrives.

The Payoff

Yes, this model requires more communication. Yes, it requires both people to be secure enough to share responsibility for team health. But the payoff is worth it: better decisions, stronger teams, and design work that's both excellent and sustainable.

When both roles are healthy and working well together, you get the best of both worlds: deep craft expertise and strong people leadership. When one person is out sick, on vacation, or overwhelmed, the other can help maintain the team's health. When a decision requires both the people perspective and the craft perspective, you've got both right there in the room.
Most importantly, the framework scales. As your team grows, you can apply the same system thinking to new challenges. Need to launch a design system? Lead Designer tends to the muscular system (standards and implementation), Design Manager tends to the nervous system (team adoption and change management), and both tend to circulation (communication and stakeholder alignment).
The Bottom Line

The relationship between a Design Manager and a Lead Designer isn't about dividing territories. It's about multiplying impact. When both roles understand they're tending to different aspects of the same healthy organism, magic happens.
The mind and body work together. The team gets both the strategic thinking and the craft excellence they need. And most importantly, the work that ships to users benefits from both perspectives.
So the next time you're in that meeting room, wondering why two people are talking about the same problem from different angles, remember: you're watching shared leadership in action. And if it's working well, both the mind and body of your design team are getting stronger.
From Beta to Bedrock: Build Products that Stick.
As a product builder over too many years to mention, I've lost count of the number of times I've seen promising ideas go from zero to hero in a few weeks, only to fizzle out within months.
Financial products, which is the field I work in, are no exception. With people's real, hard-earned money on the line, user expectations running high, and a crowded market, it's tempting to throw as many features at the wall as possible and hope something sticks. But this approach is a recipe for disaster. Here's why:

The pitfalls of feature-first development

When you start building a financial product from the ground up, or are migrating existing customer journeys from paper or telephony channels onto online banking or mobile apps, it's easy to get caught up in the excitement of creating new features. You might think, "If I can just add one more thing that solves this particular user problem, they'll love me!" But what happens when you inevitably hit a roadblock because the narcs (your security team!) don't like it? When a hard-fought feature isn't as popular as you thought, or it breaks due to unforeseen complexity?

This is where the concept of the Minimum Viable Product (MVP) comes in. Jason Fried's book Getting Real and his podcast Rework often touch on this idea, even if he doesn't always call it that. An MVP is a product that provides just enough value to your users to keep them engaged, but not so much that it becomes overwhelming or difficult to maintain. It sounds like an easy concept, but it requires a razor-sharp eye, a ruthless edge, and the courage to stick by your opinion, because it is easy to be seduced by "the Columbo effect"... when there's always "just one more thing..." that someone wants to add.

The problem with most finance apps, however, is that they often become a reflection of the internal politics of the business rather than an experience solely designed around the customer. This means the focus is on delivering as many features and functionalities as possible to satisfy the needs and desires of competing internal departments, rather than providing a clear value proposition focused on what the people out there in the real world want. As a result, these products can very easily bloat into a mixed bag of confusing, unrelated, and ultimately unlovable customer experiences: a feature salad, you might say.

The importance of bedrock

So what's a better approach? How can we build products that are stable, user-friendly, and, most importantly, stick?
That's where the concept of "bedrock" comes in. Bedrock is the core element of your product that truly matters to users. It's the fundamental building block that provides value and stays relevant over time.
In the world of retail banking, which is where I work, the bedrock has got to be in and around the regular servicing journeys. People open their current account once in a blue moon but they look at it every day. They sign up for a credit card every year or two, but they check their balance and pay their bill at least once a month.
Identifying the core tasks that people want to do and then relentlessly striving to make them easy to do, dependable, and trustworthy is where the gravy's at.
But how do you get to bedrock? By focusing on the "MVP" approach, prioritizing simplicity, and iterating towards a clear value proposition. This means cutting out unnecessary features and focusing on delivering real value to your users.
It also means having some guts, because your colleagues might not always instantly share your vision to start with. And controversially, sometimes it can even mean making it clear to customers that you're not going to come to their house and make their dinner. The occasional "opinionated user interface design" (i.e. clunky workaround for edge cases) might sometimes be what you need to use to test a concept or buy you space to work on something more important.
Practical strategies for building financial products that stick
So what are the key strategies I've learned from my own experience and research?
- Start with a clear "why": What problem are you trying to solve? For whom? Make sure your mission is crystal clear before building anything. Make sure it aligns with your company's objectives, too.
- Focus on a single, core feature and obsess on getting that right before moving on to something else: Resist the temptation to add too many features at once. Instead, choose one that delivers real value and iterate from there.
- Prioritize simplicity over complexity: Less is often more when it comes to financial products. Cut out unnecessary bells and whistles and keep the focus on what matters most.
- Embrace continuous iteration: Bedrock isn't a fixed destination; it's a dynamic process. Continuously gather user feedback, refine your product, and iterate towards that bedrock state.
- Stop, look and listen: Don't just test your product as part of your delivery process; test it repeatedly in the field. Use it yourself. Run A/B tests. Gather user feedback. Talk to people who use it, and refine accordingly.
There's an interesting paradox at play here: building towards bedrock means sacrificing some short-term growth potential in favour of long-term stability. But the payoff is worth it: products built with a focus on bedrock will outlast and outperform their competitors, and deliver sustained value to users over time.
So, how do you start your journey towards bedrock? Take it one step at a time. Start by identifying those core elements that truly matter to your users. Focus on building and refining a single, powerful feature that delivers real value. And above all, test obsessively, for, in the words of Abraham Lincoln, Alan Kay, or Peter Drucker (whomever you believe!!), "The best way to predict the future is to create it."
User Research Is Storytelling
Ever since I was a boy, I've been fascinated with movies. I loved the characters and the excitement, but most of all the stories. I wanted to be an actor. And I believed that I'd get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Now, I realize that there's an element of theater to UX. I hadn't really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders (the product team and decision makers) along and get them interested in learning more.
Think of your favorite movie. More than likely it follows a three-act structure that's commonly seen in storytelling: the setup, the conflict, and the resolution. The first act shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, problems grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be especially helpful in explaining user research to others.
Three-act structure in movies (© 2024 StudioBinder. Image used with permission from StudioBinder.).
Use storytelling as a structure to do research
It's sad to say, but many have come to see research as being expendable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or, worse, their own opinion to make the "right" choices for users based on their experience or accepted best practices. That may get teams some of the way, but that approach can so easily miss out on solving users' real problems. To remain user-centered, this is something we should avoid. User research elevates design. It keeps it on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors.
In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let's look at the different acts and how they align with user research.
Act one: setup
The setup is all about understanding the background, and that's where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You're learning about what exists today, the challenges users have, and how the challenges affect them, just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn't need to be a huge investment in time or money.
Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: "'Walk me through your day yesterday.' That's it. Present that one request. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you're doing ethnography." According to Hall, "[This] will probably prove quite illuminating. In the highly unlikely case that you didn't learn anything new or useful, carry on with enhanced confidence in your direction."
This makes total sense to me. And I love that this makes user research so accessible. You don't need to prepare a lot of documentation; you can just recruit participants and do it! This can yield a wealth of information about your users, and it'll help you better understand them and what's going on in their lives. That's really what act one is all about: understanding where users are coming from.
Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you've heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that's the beginning of a compelling story. It's the point in the plot where you realize that the main characters (or the users in this case) are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can't complete certain tasks. Or maybe they do empathize with users' struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.
Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone: users, the product, and stakeholders. It's like winning an Oscar in movie terms: it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research.
This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.
Act two: conflict
Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that's tripping users up. Like act two in a movie, more issues will crop up along the way. It's here that you learn more about the characters as they grow and develop through this act.
Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: "As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new."
There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user's struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.
Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests like going to a play and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions, including surprise, agreement, disagreement, and discussions about what they're seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors' interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.
If in-person usability testing is like watching a play (staged and controlled), then conducting usability testing in the field is like immersive theater where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conduct your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context, things come up that wouldn't have in a lab environment, and conversations can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that's often missing from remote usability tests.
That's not to say that the "movies" (remote sessions) aren't a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what's going on. And they open the doors to a much wider geographical pool of users. But with any remote session there is the potential of time wasted if participants can't log in or get their microphone working.
The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they're problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you'll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story, where the excitement is, but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things, and these twists in the story can move things in new directions.
Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you're evaluating in the usability test aren't grounded in a solid understanding of your users (foundational research), there's not much to be gained by doing usability testing in the first place. That's because you're narrowing the focus of what you're getting feedback on, without understanding the users' needs. As a result, there's no way of knowing whether the designs might solve a problem that users have. It's only feedback on a particular design in the context of a usability test.
On the other hand, if you only do foundational research, while you might have set out to solve the right problem, you won't know whether the thing that you're building will actually solve that. This illustrates the importance of doing both foundational and directional research.
In act two, stakeholders will, hopefully, get to watch the story unfold in the user sessions, which creates the conflict and tension in the current design by surfacing their highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.
Act three: resolution
While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it's important to have an audience for the first two acts, it's crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users' feedback together, ask questions, and discuss what's possible within the project's constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.
This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.
Nancy Duarte in the Harvard Business Review offers an approach to structuring presentations that follow a persuasive story. "The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved," writes Duarte. "That tension helps them persuade the audience to adopt a new mindset or behave differently."
A persuasive story pattern.
This type of structure aligns well with research results, and particularly results from usability tests. It provides evidence for "what is" (the problems that you've identified) and "what could be" (your recommendations on how to address them). And so on and so forth.
You can reinforce your recommendations with examples of things that competitors are doing that could address these issues or with examples where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. And this continues until the end of the session when you've wrapped everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product: the denouement of the story. This stage gives stakeholders the next steps and hopefully the momentum to take those steps!
While we are nearly at the end of this story, let's reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research:
- Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). This is the beginning of the plot. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. The output of these methods can include personas, empathy maps, user journeys, and analytics dashboards.
- Act two: Next, there's character development. There's conflict and tension as the protagonists encounter problems and challenges, which they must overcome. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristics evaluation. The output of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.
- Act three: The protagonists triumph and you see what a better future looks like. In act three, researchers may use methods including presentation decks, storytelling, and digital media. The output of these can be: presentation decks, video clips, audio clips, and pictures.
The researcher has multiple roles: they're the storyteller, the director, and the producer. The participants have a small role, but they are significant characters (in the research). And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users' stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product's ills.
So the next time that you're planning research with clients or you're speaking to stakeholders about research that you've done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.
To Ignite a Personalization Practice, Run this Prepersonalization Workshop
Picture this. You've joined a squad at your company that's designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you're designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed.
Between the fantasy of getting it right and the fear of it going wrong (like when we encounter "persofails" in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats), the personalization gap is real. It's an especially confounding place to be a digital professional without a map, a compass, or a plan.
For those of you venturing into personalization, there's no Lonely Planet and few tour guides because effective personalization is so specific to each organization's talent, technology, and market position.
But you can ensure that your team has packed its bags sensibly.
Designing for personalization makes for strange bedfellows. A savvy art-installation satire on the challenges of humane design in the era of the algorithm. Credit: Signs of the Times, Scott Kelly and Ben Polkinghome.
There's a DIY formula to increase your chances for success. At minimum, you'll defuse your boss's irrational exuberance. Before the party you'll need to effectively prepare.
We call it prepersonalization.
Behind the music
Consider Spotify's DJ feature, which debuted this past year.
https://www.youtube.com/watch?v=ok-aNnc0Dko
We're used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.
So how do you know where to place your personalization bets? How do you design consistent interactions that won't trip up users or, worse, breed mistrust? We've found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.
From Big Tech to fledgling startups, we've seen the same evolution up close with our clients. In our experiences with working on small and large personalization efforts, a program's ultimate track record (and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts) turns on how effectively these prepersonalization activities play out.
Time and again, we've seen effective workshops separate future success stories from unsuccessful efforts, saving considerable time, resources, and collective well-being in the process.
A personalization practice involves a multiyear effort of testing and feature development. It's not a switch-flip moment in your tech stack. It's best managed as a backlog that often evolves through three steps:
- customer experience optimization (CXO, also known as A/B testing or experimentation)
- always-on automations (whether rules-based or machine-generated)
- mature features or standalone product development (such as Spotify's DJ experience)
This is why we created our progressive personalization framework and why we're field-testing an accompanying deck of cards: we believe that there's a base grammar, a set of "nouns and verbs" that your organization can use to design experiences that are customized, personalized, or automated. You won't need these cards. But we strongly recommend that you create something similar, whether that might be digital or physical.
Set your kitchen timer
How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here's a summary of our broader approach along with details on the essential first-day activities.
The full arc of the wider workshop is threefold:
- Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
- Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work.
- Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model.
Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.
Kickstart: Whet your appetite
We call the first lesson the "landscape of connected experience." It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.
Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here's a list of 142 different interactions to jog your thinking.
This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here's a long-form primer and a strategic framework.
Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.
Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It's also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions.
Getting intentional about the desired outcomes is an important component to a large-scale personalization program. Credit: Bucket Studio.
Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can't prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.
The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We're pretty sure that you do: it's just a matter of recognizing the relative size of that need and its remedy.) In our cards, we've noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.
Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.
The largest management consultancies have established practice areas in personalization, and they regularly research program risks and challenges. Credit: Boston Consulting Group.
At this point, you've hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps? Good. You're ready to continue.
Hit that test kitchen
Next, let's look at what you'll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you're configuring a connected experience?
What's important here is to avoid treating the installed software as if it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program's regularly evolving menu.
Progressive personalization, a framework for designing connected experiences. Credit: Bucket Studio and Colin Eagan.
The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating "dishes" is the way that you'll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.
The dishes will come from recipes, and those recipes have set ingredients.
In the same way that ingredients form a recipe, you can also create cards to break down a personalized interaction into its constituent parts. Credit: Bucket Studio and Colin Eagan.
Verify your ingredients
Like a good product manager, you'll make sure (and you'll validate with the right stakeholders present) that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you're targeting, content and design elements, the context for the interaction, and your measure for how it'll come together.
This isn't just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team:
- compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette;
- specify a consistent set of interactions that users find uniform or familiar;
- and develop parity across performance measurements and key performance indicators too.
This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.
Compose your recipe
What ingredients are important to you? Think of a who-what-when-why construct:
- Who are your key audience segments or groups?
- What kind of content will you give them, in what design elements, and under what circumstances?
- And for which business and user benefits?
We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.
Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below.
- Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
- Welcome automation: When there's a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
- Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
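As a loose illustration of the if-then documentation style described earlier, here is a minimal sketch of how one of these recipes might be captured as structured data. The type and property names are hypothetical, invented for this example rather than taken from any personalization engine's API.

```swift
import Foundation

// A hypothetical way to record a personalization "recipe" using the
// who-what-when-why construct from the cards. Names are illustrative only.
struct PersonalizationRecipe {
    let who: String   // audience segment or group
    let when: String  // triggering context or condition
    let what: String  // content and design element delivered
    let why: String   // business and user benefit to measure against
}

// The "winback automation" example expressed as an if-then statement.
let winback = PersonalizationRecipe(
    who: "Subscriber whose plan is lapsing or whose renewal recently failed",
    when: "Subscription end date is near, or a renewal payment has just failed",
    what: "Email containing a promotional offer to renew",
    why: "Recover at-risk subscribers and remind them to renew"
)

// Render the recipe as the kind of if-then sentence the team can compare and prioritize.
print("IF \(winback.who), WHEN \(winback.when), THEN \(winback.what), BECAUSE \(winback.why).")
```

Even an informal record like this makes recipes easier to compare across teams and helps keep the resulting interactions consistent, which is the point of writing them down as if-then statements in the first place.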
A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we've also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.
You can think of the later stages of the workshop as moving from recipes toward a cookbook in focus, like a more nuanced customer-journey mapping. Individual "cooks" will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.
Better kitchens require better architecture
Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. With that being said, "Complicated problems can be hard to solve, but they are addressable with rules and recipes."
When personalization becomes a laugh line, it's because a team is overfitting: they aren't designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI's output quality, for example, is indeed limited by your IA. Spotify's poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture.
You can definitely stand the heat…
Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed.
This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. Wiring up your information layer isn't an overnight affair. But if you use the same cookbook and shared recipes, you'll have solid footing for success. We designed these activities to make your organization's needs concrete and clear, long before the hazards pile up.
While there are associated costs toward investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don't squander it. The proof, as they say, is in the pudding.
The Wax and the Wane of the Web
I offer a single bit of advice to friends and family when they become new parents: When you start to think that you've got everything figured out, everything will change. Just as you start to get the hang of feedings, diapers, and regular naps, it's time for solid food, potty training, and overnight sleeping. When you figure those out, it's time for preschool and rare naps. The cycle goes on and on.
The same applies for those of us working in design and development these days. Having worked on the web for almost three decades at this point, I've seen the regular wax and wane of ideas, techniques, and technologies. Each time that we as developers and designers get into a regular rhythm, some new idea or technology comes along to shake things up and remake our world.
How we got here
I built my first website in the mid-'90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 "web safe" colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.
At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browser makers, developers, and designers. This shift toward standards didn't happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to build standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers, with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.
Server-side languages like PHP, Java, and .NET overtook Perl as the predominant back-end processors, and the cgi-bin was tossed in the trash bin. With these better server-side tools came the first era of web applications, starting with content-management systems (particularly in the blogging space with tools like Blogger, Grey Matter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and back end. Suddenly, pages could update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers display fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.
These new technologies, standards, and techniques reinvigorated the industry in many ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on tons of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all manner of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts was a great start toward varying the typefaces from the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.
The web as software platform
The symbiosis between the front end and back end continued to improve, and that led to the current era of modern web applications. Between expanded server-side programming languages (which kept growing to include Ruby, Python, Go, and others) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Alongside these tools came others, including collaborative version control, build automation, and shared package libraries. What was once primarily an environment for linked documents became a realm of infinite possibilities.
At the same time, mobile devices became more capable, and they gave us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and any time.
This combination of capable mobile devices and powerful development tools contributed to the waxing of social media and other centralized tools for people to connect and consume. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media offered connections on a global scale, with both the good and bad that that entails.
Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote "Of Time and the Web." Or check out the "Web Design History Timeline" at the Web Design Museum. Neal Agarwal also has a fun tour through "Internet Artifacts."
Where we are now
In the last couple of years, it's felt like we've begun to reach another major inflection point. As social-media platforms fracture and wane, there's been a growing interest in owning our own content again. There are many different ways to make a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all flavors. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other tools of the IndieWeb can help with this, but they're still relatively underimplemented and hard to use for the less nerdy. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.
Browser support for CSS, JavaScript, and other standards like web components has accelerated, especially through efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to find that its coverage is already above 80 percent. Nowadays, the barrier to using newer techniques often isn't browser support but simply the limits of how quickly designers and developers can learn what's available and how to adopt it.
Today, with a few commands and a couple of lines of code, we can prototype almost any idea. All the tools that we now have available make it easier than ever to start something new. But the upfront cost that these frameworks may save in initial delivery eventually comes due as upgrading and maintaining them becomes a part of our technical debt.
If we rely on third-party frameworks, adopting new standards can sometimes take longer since we may have to wait for those frameworks to adopt those standards. These frameworks, which used to let us adopt new techniques sooner, have now become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether through poor code, network issues, or other environmental factors), there's often no alternative, leaving users with blank or broken pages.
Where do we go from here?
Today's hacks help to shape tomorrow's standards. And there's nothing inherently wrong with embracing hacks, for now, to move the present forward. Problems only arise when we're unwilling to admit that they're hacks or we hesitate to replace them. So what can we do to create the future we want for the web?
Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What's the cost to users? To future developers? To standards adoption? Sometimes the convenience may be worth it. Sometimes it's just a hack that you've grown accustomed to. And sometimes it's holding you back from even better options.
Start from standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. The same isn't always true of third-party frameworks. Sites built with even the hackiest of HTML from the '90s still work just fine today. The same can't always be said of sites built with frameworks even after just a couple years.
Design with care. Whether your craft is code, pixels, or processes, consider the impacts of each decision. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Rather than rushing headlong to "move fast and break things," use the time saved by modern tools to consider more carefully and design with deliberation.
Always be learning. If you're always learning, you're also growing. Sometimes it may be hard to pinpoint what's worth learning and what's just today's hack. You might end up focusing on something that won't matter next year, even if you were to focus solely on learning standards. (Remember XHTML?) But constant learning opens up new connections in your brain, and the hacks that you learn one day may help to inform different experiments another day.
Play, experiment, and be weird! This web that we've built is the ultimate experiment. It's the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be courageous and try new things. Build a playground for ideas. Make goofy experiments in your own mad science lab. Start your own small business. There has never been a more empowering place to be creative, take risks, and explore what we're capable of.
Share and amplify. As you experiment, play, and learn, share what's worked for you. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they've taught you.
Go forth and make
As designers and developers for the web (and beyond), we're responsible for building the future every day, whether that may take the shape of personal websites, social media tools used by billions, or anything in between. Let's imbue our values into the things that we create, and let's make the web a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. Every time you think that you've mastered the web, everything will change.
Opportunities for AI in Accessibility
In reading Joe Dolson's recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I'm very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.
I'd like you to consider this a "yes… and" piece to complement Joe's post. I'm not trying to refute any of what he's saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I'm not saying that there aren't real risks or pressing issues with AI that need to be addressed (there are, and we've needed to address them, like, yesterday), but I want to take a little time to talk about what's possible in hopes that we'll get there one day.
Alternative text
Joe's piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren't great. As he rightly points out, the current state of image analysis is pretty poor, especially for certain image types, in large part because current AI systems examine images in isolation rather than within the contexts that they're in (which is a consequence of having separate "foundation" models for text analysis and image analysis). Today's models aren't trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either. Still, I think there's potential in this space.
As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text (even if that starting point might be a prompt saying What is this BS? That's not right at all… Let me try to offer a starting point), I think that's a win.
Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions and it'll improve authors' efficiency toward making their pages more accessible.
While complex images, like graphs and charts, are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT4 announcement points to an interesting opportunity as well. Let's suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let's suppose that that was the description that was in place.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:
- Do more people use smartphones or feature phones?
- How many more?
- Is there a group of people that don't fall into either of these buckets?
- How many is that?
Setting aside the realities of large language model (LLM) hallucinations (where a model just makes up plausible-sounding "facts") for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.
Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools' chat-based interfaces and our existing ability to manipulate images in today's AI tools, that seems like a possibility.
Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!
Matching algorithms
Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it's equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it's Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there's real potential for algorithm development to help people with disabilities.
Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate's strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.
When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That's why diverse teams are so important.
Imagine that a social media company's recommendation engine was tuned to analyze who you're following and to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren't white or aren't male who also talk about AI. If you took its recommendations, perhaps you'd get a more holistic and nuanced understanding of what's happening in the AI field. These same systems should also use their understanding of biases about particular communities, including, for instance, the disability community, to make sure that they aren't recommending any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.
Other ways that AI can help people with disabilities
If I weren't trying to put this together between other tasks, I'm sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I'm going to make this last section into a bit of a lightning round. In no particular order:
- Voice preservation. You may have seen the VALL-E paper or Apple's Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It's possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig's disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it's something that we need to approach responsibly, but the tech has truly transformative potential.
- Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson's and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
- Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that's prepped for Bionic Reading.
We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences, with all their complexities (and joys and pain), are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.
Want a model that doesn't demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that's authored by people with a range of disabilities, and make sure that that's well represented in the training data.
Want a model that doesn't use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won't be replacing human copy editors anytime soon.
Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.
I have no doubt that AI can and will harm people⊠today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.
Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.
I am a creative.
I am a creative. What I do is alchemy. It is a mystery. I do not so much do it, as let it be done through me.
I am a creative. Not all creative people like this label. Not all see themselves this way. Some creative people see science in what they do. That is their truth, and I respect it. Maybe I even envy them, a little. But my process is differentâmy being is different.
Apologizing and qualifying in advance is a distraction. Thatâs what my brain does to sabotage me. I set it aside for now. I can come back later to apologize and qualify. After Iâve said what I came to say. Which is hard enough.
Except when it is easy and flows like a river of wine.
Sometimes it does come that way. Sometimes what I need to create comes in an instant. I have learned not to say it at that moment, because if you admit that sometimes the idea just comes and it is the best idea and you know it is the best idea, they think you donât work hard enough.
Sometimes I work and work and work until the idea comes. Sometimes it comes instantly and I donât tell anyone for three days. Sometimes Iâm so excited by the idea that came instantly that I blurt it out, canât help myself. Like a boy who found a prize in his Cracker Jacks. Sometimes I get away with this. Sometimes other people agree: yes, that is the best idea. Most times they donât and I regret having given way to enthusiasm.
Enthusiasm is best saved for the meeting where it will make a difference. Not the casual get-together that precedes that meeting by two other meetings. Nobody knows why we have all these meetings. We keep saying weâre doing away with them, but then just finding other ways to have them. Sometimes they are even good. But other times they are a distraction from the actual work. The proportion between when meetings are useful, and when they are a pitiful distraction, varies, depending on what you do and where you do it. And who you are and how you do it. Again I digress. I am a creative. That is the theme.
Sometimes many hours of hard and patient work produce something that is barely serviceable. Sometimes I have to accept that and move on to the next project.
Don't ask about process. I am a creative.
I am a creative. I don't control my dreams. And I don't control my best ideas.
I can hammer away, surround myself with facts or images, and sometimes that works. I can go for a walk, and sometimes that works. I can be making dinner and thereâs a Eureka having nothing to do with sizzling oil and bubbling pots. Often I know what to do the instant I wake up. And then, almost as often, as I become conscious and part of the world again, the idea that would have saved me turns to vanishing dust in a mindless wind of oblivion. For creativity, I believe, comes from that other world. The one we enter in dreams, and perhaps, before birth and after death. But thatâs for poets to wonder, and I am not a poet. I am a creative. And itâs for theologians to mass armies about in their creative world that they insist is real. But that is another digression. And a depressing one. Maybe on a much more important topic than whether I am a creative or not. But still a digression from what I came here to say.
Sometimes the process is avoidance. And agony. You know the clichĂ© about the tortured artist? Itâs true, even when the artist (and letâs put that noun in quotes) is trying to write a soft drink jingle, a callback in a tired sitcom, a budget request.
Some people who hate being called creative may be closeted creatives, but thatâs between them and their gods. No offense meant. Your truth is true, too. But mine is for me.
Creatives recognize creatives.
Creatives recognize creatives like queers recognize queers, like real rappers recognize real rappers, like cons know cons. Creatives feel massive respect for creatives. We love, honor, emulate, and practically deify the great ones. To deify any human is, of course, a tragic mistake. We have been warned. We know better. We know people are just people. They squabble, they are lonely, they regret their most important decisions, they are poor and hungry, they can be cruel, they can be just as stupid as we can, because, like us, they are clay. But. But. But they make this amazing thing. They birth something that did not exist before them, and could not exist without them. They are the mothers of ideas. And I suppose, since it's just lying there, I have to add that they are the mothers of invention. Ba dum bum! OK, that's done. Continue.
Creatives belittle our own small achievements, because we compare them to those of the great ones. Beautiful animation! Well, Iâm no Miyazaki. Now THAT is greatness. That is greatness straight from the mind of God. This half-starved little thing that I made? It more or less fell off the back of the turnip truck. And the turnips werenât even fresh.
Creatives know that, at best, they are Salieri. Even the creatives who are Mozart believe that.
I am a creative. I havenât worked in advertising in 30 years, but in my nightmares, itâs my former creative directors who judge me. And they are right to do so. I am too lazy, too facile, and when it really counts, my mind goes blank. There is no pill for creative dysfunction.
I am a creative. Every deadline I make is an adventure that makes Indiana Jones look like a pensioner snoring in a deck chair. The longer I remain a creative, the faster I am when I do my work and the longer I brood and walk in circles and stare blankly before I do that work.
I am still 10 times faster than people who are not creative, or people who have only been creative a short while, or people who have only been professionally creative a short while. Itâs just that, before I work 10 times as fast as they do, I spend twice as long as they do putting the work off. I am that confident in my ability to do a great job when I put my mind to it. I am that addicted to the adrenaline rush of postponement. I am still that afraid of the jump.
I am not an artist.
I am a creative. Not an artist. Though I dreamed, as a lad, of someday being that. Some of us belittle our gifts and dislike ourselves because we are not Michelangelos and Warhols. That is narcissism, but at least we aren't in politics.
I am a creative. Though I believe in reason and science, I decide by intuition and impulse. And live with what followsâthe catastrophes as well as the triumphs.
I am a creative. Every word Iâve said here will annoy other creatives, who see things differently. Ask two creatives a question, get three opinions. Our disagreement, our passion about it, and our commitment to our own truth are, at least to me, the proofs that we are creatives, no matter how we may feel about it.
I am a creative. I lament my lack of taste in the areas about which I know very little, which is to say almost all areas of human knowledge. And I trust my taste above all other things in the areas closest to my heart, or perhaps, more accurately, to my obsessions. Without my obsessions, I would probably have to spend my time looking life in the eye, and almost none of us can do that for long. Not honestly. Not really. Because much in life, if you really look at it, is unbearable.
I am a creative. I believe, as a parent believes, that when I am gone, some small good part of me will carry on in the mind of at least one other person.
Working saves me from worrying about work.
I am a creative. I live in dread of my small gift suddenly going away.
I am a creative. I am too busy making the next thing to spend too much time deeply considering that almost nothing I make will come anywhere near the greatness I comically aspire to.
I am a creative. I believe in the ultimate mystery of process. I believe in it so much, I am even fool enough to publish an essay I dictated into a tiny machine and didnât take time to review or revise. I wonât do this often, I promise. But I did it just now, because, as afraid as I might be of your seeing through my pitiful gestures toward the beautiful, I was even more afraid of forgetting what I came to say.
There. I think Iâve said it.
Humility: An Essential Value
Humility, a designerâs essential valueâthat has a nice ring to it. What about humility, an office managerâs essential value? Or a dentistâs? Or a librarianâs? They all sound great. When humility is our guiding light, the path is always open for fulfillment, evolution, connection, and engagement. In this chapter, weâre going to talk about why.
That said, this is a book for designers, and to that end, Iâd like to start with a storyâwell, a journey, really. Itâs a personal one, and Iâm going to make myself a bit vulnerable along the way. I call it:
The Tale of Justin's Preposterous Pate
When I was coming out of art school, a long-haired, goateed neophyte, print was a known quantity to me; design on the web, however, was rife with complexities to navigate and discover, a problem to be solved. Though I had been formally trained in graphic design, typography, and layout, what fascinated me was how these traditional skills might be applied to a fledgling digital landscape. This theme would ultimately shape the rest of my career.
So rather than graduate and go into print like many of my friends, I devoured HTML and JavaScript books into the wee hours of the morning and taught myself how to code during my senior year. I wantedânay, neededâto better understand the underlying implications of what my design decisions would mean once rendered in a browser.
The late â90s and early 2000s were the so-called âWild Westâ of web design. Designers at the time were all figuring out how to apply design and visual communication to the digital landscape. What were the rules? How could we break them and still engage, entertain, and convey information? At a more macro level, how could my values, inclusive of humility, respect, and connection, align in tandem with that? I was hungry to find out.
Though Iâm talking about a different era, those are timeless considerations between non-career interactions and the world of design. What are your core passions, or values, that transcend medium? Itâs essentially the same concept we discussed earlier on the direct parallels between what fulfills you, agnostic of the tangible or digital realms; the core themes are all the same.
First within tables, animated GIFs, and Flash, then with Web Standards, divs, and CSS, there was personality, raw unbridled creativity, and unique means of presentment that often defied any semblance of a visible grid. Splash screens and "browser requirement" pages aplenty. Usability and accessibility were typically victims of such a creation, but such paramount facets of any digital design were largely (and, in hindsight, unfairly) disregarded at the expense of experimentation.
For example, this iteration of my personal portfolio site ("the pseudoroom") from that era was experimental, if not a bit heavy-handed, in the visual communication of the concept of a living sketchbook. Very skeuomorphic. I collaborated with fellow designer and dear friend Marc Clancy (now a co-founder of the creative project organizing app Milanote) on this one, where we'd first sketch and then pass a Photoshop file back and forth to trick things out and play with varied user interactions. Then, I'd break it down and code it into a digital layout.
Figure 1: "the pseudoroom" website, hitting the sketchbook metaphor hard.
Along with design folio pieces, the site also offered free downloads for Mac OS customizations: desktop wallpapers that were effectively design experimentation, custom-designed typefaces, and desktop icons.
From around the same time, GUI Galaxy was a design, pixel art, and Mac-centric news portal some graphic designer friends and I conceived, designed, developed, and deployed.
Figure 2: GUI Galaxy, web standards-compliant design news portal
Design news portals were incredibly popular during this period, featuring (what would now be considered) Tweet-size, small-format snippets of pertinent news from the categories I previously mentioned. If you took Twitter, curated it to a few categories, and wrapped it in a custom-branded experience, you'd have a design news portal from the late 90s / early 2000s.
We as designers had evolved and created a bandwidth-sensitive, web standards award-winning, much more accessibility-conscious website. Still ripe with experimentation, yet more mindful of equitable engagement. You can see a couple of content panes here, noting general news (tech, design) and Mac-centric news below. We also offered many of the custom downloads I cited before as present on my folio site but branded and themed to GUI Galaxy.
The siteâs backbone was a homegrown CMS, with the presentation layer consisting of global design + illustration + news author collaboration. And the collaboration effort here, in addition to experimentation on a âbrandâ and content delivery, was hitting my core. We were designing something bigger than any single one of us and connecting with a global audience.
Collaboration and connection transcend medium in their impact, immensely fulfilling me as a designer.
Now, why am I taking you down this trip of design memory lane? Two reasons.
First, thereâs a reason for the nostalgia for that design era (the âWild Westâ era, as I called it earlier): the inherent exploration, personality, and creativity that saturated many design portals and personal portfolio sites. Ultra-finely detailed pixel art UI, custom illustration, bespoke vector graphics, all underpinned by a strong design community.
Todayâs web design has been in a period of stagnation. I suspect thereâs a strong chance youâve seen a site whose structure looks something like this: a hero image / banner with text overlaid, perhaps with a lovely rotating carousel of images (laying the snark on heavy there), a call to action, and three columns of sub-content directly beneath. Maybe an icon library is employed with selections that vaguely relate to their respective content.
Design, as it's applied to the digital landscape, is in dire need of thoughtful layout, typography, and visual engagement that goes hand-in-hand with all the modern considerations we now know are paramount: usability. Accessibility. Load times and bandwidth-sensitive content delivery. A responsive presentation that meets human beings wherever they're engaging from. We must be mindful of, and respectful toward, those concerns, but not at the expense of creativity of visual communication or via replicating cookie-cutter layouts.
Pixel Problems
Websites during this period were often designed and built on Macs whose OS and desktops looked something like this. This is Mac OS 7.5, but 8 and 9 weren't that different.
Figure 3: A Mac OS 7.5-centric desktop.
Desktop icons fascinated me: how could any single one, at any given point, stand out to get my attention? In this example, the user's desktop is tidy, but think of a more realistic example with icon pandemonium. Or, say an icon was part of a larger system grouping (fonts, extensions, control panels): how did it also maintain cohesion amongst a group?
These were 32 x 32 pixel creations, utilizing a 256-color palette, designed pixel-by-pixel as mini mosaics. To me, this was the embodiment of digital visual communication under such ridiculous constraints. And often, ridiculous restrictions can yield the purification of concept and theme.
So I began to research and do my homework. I was a student of this new medium, hungry to dissect, process, discover, and make it my own.
Expanding upon the notion of exploration, I wanted to see how I could push the limits of a 32x32 pixel grid with that 256-color palette. Those ridiculous constraints forced a clarity of concept and presentation that I found incredibly appealing. The digital gauntlet had been tossed, and that challenge fueled me. And so, in my dorm room into the wee hours of the morning, I toiled away, bringing conceptual sketches into mini mosaic fruition.
These are some of my creations, utilizing the only tool available at the time for creating icons: ResEdit. ResEdit was a clunky, built-in Mac OS utility not really made for exactly what we were using it for. At the core of all of this work: Research. Challenge. Problem-solving. Again, these core connection-based values are agnostic of medium.
Figure 4: A selection of my pixel art design, 32x32 pixel canvas, 8-bit palette
There's one more design portal I want to talk about, which also serves as the second reason for my story to bring this all together.
This is K10k, short for Kaliber 1000. K10k was founded in 1998 by Michael Schmidt and Toke Nygaard, and was the design news portal on the web during this period. With its pixel art-fueled presentation, ultra-focused care given to every facet and detail, and with many of the more influential designers of the time who were invited to be news authors on the site, well... it was the place to be, my friend. With respect where respect is due, GUI Galaxyâs concept was inspired by what these folks were doing.
Figure 5: The K10k website
For my part, the combination of my web design work and pixel art exploration began to get me some notoriety in the design scene. Eventually, K10k noticed and added me as one of their very select group of news authors to contribute content to the site.
Amongst my personal work and side projectsâand now with this inclusionâin the design community, this put me on the map. My design work also began to be published in various printed collections, in magazines domestically and overseas, and featured on other design news portals. With that degree of success while in my early twenties, something else happened:
I evolvedâdevolved, reallyâinto a colossal asshole (and in just about a year out of art school, no less). The press and the praise became what fulfilled me, and they went straight to my head. They inflated my ego. I actually felt somewhat superior to my fellow designers.
The casualties? My design stagnated. Its evolutionâmy evolutionâ stagnated.
I felt so supremely confident in my abilities that I effectively stopped researching and discovering. When previously sketching concepts or iterating ideas in lead was my automatic step one, I instead leaped right into Photoshop. I drew my inspiration from the smallest of sources (and with blinders on). Any critique of my work from my peers was often vehemently dismissed. The most tragic loss: I had lost touch with my values.
My ego almost cost me some of my friendships and burgeoning professional relationships. I was toxic in talking about design and in collaboration. But thankfully, those same friends gave me a priceless gift: candor. They called me out on my unhealthy behavior.
Admittedly, it was a gift I initially did not accept but ultimately was able to deeply reflect upon. I was soon able to accept, and process, and course correct. The realization laid me low, but the re-awakening was essential. I let go of the ârewardâ of adulation and re-centered upon what stoked the fire for me in art school. Most importantly: I got back to my core values.
Always Students
Following that short-term regression, I was able to push forward in my personal design and career. And I could self-reflect as I got older to facilitate further growth and course correction as needed.
As an example, letâs talk about the Large Hadron Collider. The LHC was designed âto help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among the elementary objects, the deep structure of space and time, and in particular the interrelation between quantum mechanics and general relativity.â Thanks, Wikipedia.
Around fifteen years ago, in one of my earlier professional roles, I designed the interface for the application that generated the LHCâs particle collision diagrams. These diagrams are the rendering of whatâs actually happening inside the Collider during any given particle collision event and are often considered works of art unto themselves.
Designing the interface for this application was a fascinating process for me, in that I worked with Fermilab physicists to understand what the application was trying to achieve, but also how the physicists themselves would be using it. To that end, in this role,
I cut my teeth on usability testing, working with the Fermilab team to iterate and improve the interface. How they spoke and what they spoke about was like an alien language to me. And by making myself humble and working under the mindset that I was but a student, I made myself available to be a part of their world to generate that vital connection.
I also had my first ethnographic observation experience: going to the Fermilab location and observing how the physicists used the tool in their actual environment, on their actual terminals. For example, one takeaway was that due to the level of ambient light-driven contrast within the facility, the data columns ended up using white text on a dark gray background instead of black text-on-white. This enabled them to pore over reams of data during the day and ease their eye strain. And Fermilab and CERN are government entities with rigorous accessibility standards, so my knowledge in that realm also grew. The barrier-free design was another essential form of connection.
So to those core drivers of my visual problem-solving soul and ultimate fulfillment: discovery, exposure to new media, observation, human connection, and evolution. What opened the door for those values was me checking my ego before I walked through it.
An evergreen willingness to listen, learn, understand, grow, evolve, and connect yields our best work. In particular, I want to focus on the words âgrowâ and âevolveâ in that statement. If we are always students of our craft, we are also continually making ourselves available to evolve. Yes, we have years of applicable design study under our belt. Or the focused lab sessions from a UX bootcamp. Or the monogrammed portfolio of our work. Or, ultimately, decades of a career behind us.
But all that said: experience does not equal âexpert.â
As soon as we close our minds via an inner monologue of âknowing it allâ or branding ourselves a â#thoughtleaderâ on social media, the designer we are is our final form. The designer we can be will never exist.
Personalization Pyramid: A Framework for Designing with User Data
As a UX professional in todayâs data-driven landscape, itâs increasingly likely that youâve been asked to design a personalized digital experience, whether itâs a public website, user portal, or native application. Yet while there continues to be no shortage of marketing hype around personalization platforms, we still have very few standardized approaches for implementing personalized UX.
That's where we come in. After completing dozens of personalization projects over the past few years, we gave ourselves a goal: could we create a holistic personalization framework specifically for UX practitioners? The Personalization Pyramid is a designer-centric model for standing up human-centered personalization programs, spanning data, segmentation, content delivery, and overall goals. By using this approach, you will be able to understand the core components of a contemporary, UX-driven personalization program (or at the very least know enough to get started).
Growing tools for personalization: According to a Dynamic Yield survey, 39% of respondents felt support is available on-demand when a business case is made for it (up 15% from 2020).
Source: âThe State of Personalization Maturity â Q4 2021â Dynamic Yield conducted its annual maturity survey across roles and sectors in the Americas (AMER), Europe and the Middle East (EMEA), and the Asia-Pacific (APAC) regions. This marks the fourth consecutive year publishing our research, which includes more than 450 responses from individuals in the C-Suite, Marketing, Merchandising, CX, Product, and IT.
Getting Started
For the sake of this article, we'll assume you're already familiar with the basics of digital personalization. A good overview can be found here: Website Personalization Planning. While UX projects in this area can take on many different forms, they often stem from similar starting points.
Common scenarios for starting a personalization project:
- Your organization or client purchased a content management system (CMS) or marketing automation platform (MAP) or related technology that supports personalization
- The CMO, CDO, or CIO has identified personalization as a goal
- Customer data is disjointed or ambiguous
- You are running some isolated targeting campaigns or A/B testing
- Stakeholders disagree on personalization approach
- Mandate of customer privacy rules (e.g. GDPR) requires revisiting existing user targeting practices
Regardless of where you begin, a successful personalization program will require the same core building blocks. Weâve captured these as the âlevelsâ on the pyramid. Whether you are a UX designer, researcher, or strategist, understanding the core components can help make your contribution successful.
From the ground up: Soup-to-nuts personalization, without going nuts.
From top to bottom, the levels include:
- North Star: What larger strategic objective is driving the personalization program?
- Goals: What are the specific, measurable outcomes of the program?
- Touchpoints: Where will the personalized experience be served?
- Contexts and Campaigns: What personalization content will the user see?
- User Segments: What constitutes a unique, usable audience?
- Actionable Data: What reliable and authoritative data is captured by our technical platform to drive personalization?
- Raw Data: What wider set of data is conceivably available (already in our setting) allowing you to personalize?
Weâll go through each of these levels in turn. To help make this actionable, we created an accompanying deck of cards to illustrate specific examples from each level. Weâve found them helpful in personalization brainstorming sessions, and will include examples for you here.
Personalization pack: Deck of cards to help kickstart your personalization brainstorming.
Starting at the Top
The components of the pyramid are as follows:
North Star
A north star is what you are aiming for overall with your personalization program (big or small). The North Star defines the (one) overall mission of the personalization program. What do you wish to accomplish? North Stars cast a shadow. The bigger the star, the bigger the shadow. Examples of North Stars might include:
- Function: Personalize based on basic user inputs. Examples: âRawâ notifications, basic search results, system user settings and configuration options, general customization, basic optimizations
- Feature: Self-contained personalization componentry. Examples: âCookedâ notifications, advanced optimizations (geolocation), basic dynamic messaging, customized modules, automations, recommenders
- Experience: Personalized user experiences across multiple interactions and user flows. Examples: Email campaigns, landing pages, advanced messaging (i.e. C2C chat) or conversational interfaces, larger user flows and content-intensive optimizations (localization).
- Product: Highly differentiating personalized product experiences. Examples: Standalone, branded experiences with personalization at their core, like the âalgotorialâ playlists by Spotify such as Discover Weekly.
As in any good UX design, personalization can help accelerate designing with customer intentions. Goals are the tactical and measurable metrics that will prove the overall program is successful. A good place to start is with your current analytics and measurement program and metrics you can benchmark against. In some cases, new goals may be appropriate. The key thing to remember is that personalization itself is not a goal, rather it is a means to an end. Common goals include:
- Conversion
- Time on task
- Net promoter score (NPS)
- Customer satisfaction
Touchpoints are where the personalization happens. As a UX designer, this will be one of your largest areas of responsibility. The touchpoints available to you will depend on how your personalization and associated technology capabilities are instrumented, and should be rooted in improving a userâs experience at a particular point in the journey. Touchpoints can be multi-device (mobile, in-store, website) but also more granular (web banner, web pop-up etc.). Here are some examples:
Channel-level Touchpoints
- Email: Role
- Email: Time of open
- In-store display (JSON endpoint)
- Native app
- Search
Wireframe-level Touchpoints
- Web overlay
- Web alert bar
- Web banner
- Web content block
- Web menu
If youâre designing for web interfaces, for example, you will likely need to include personalized âzonesâ in your wireframes. The content for these can be presented programmatically in touchpoints based on our next step, contexts and campaigns.
Targeted Zones: Examples from Kibo of personalized "zones" on page-level wireframes occurring at various stages of a user journey (Engagement phase at left and Purchase phase at right.)
Source: "Essential Guide to End-to-End Personalization" by Kibo.
Contexts and Campaigns
Once youâve outlined some touchpoints, you can consider the actual personalized content a user will receive. Many personalization tools will refer to these as âcampaignsâ (so, for example, a campaign on a web banner for new visitors to the website). These will programmatically be shown at certain touchpoints to certain user segments, as defined by user data. At this stage, we find it helpful to consider two separate models: a context model and a content model. The context helps you consider the level of engagement of the user at the personalization moment, for example a user casually browsing information vs. doing a deep-dive. Think of it in terms of information retrieval behaviors. The content model can then help you determine what type of personalization to serve based on the context (for example, an âEnrichâ campaign that shows related articles may be a suitable supplement to extant content).
Personalization Context Model:
- Browse
- Skim
- Nudge
- Feast
Personalization Content Model:
- Alert
- Make Easier
- Cross-Sell
- Enrich
Weâve written extensively about each of these models elsewhere, so if youâd like to read more you can check out Colinâs Personalization Content Model and Jeffâs Personalization Context Model.
Campaign and Context cards: This level of the pyramid can help your team focus on the types of personalization to deliver to end users and the use cases in which they will experience it.
User Segments
User segments can be created prescriptively or adaptively, based on user research (e.g. via rules and logic tied to set user behaviors or via A/B testing). At a minimum you will likely need to consider how to treat the unknown or first-time visitor, the guest or returning visitor for whom you may have a stateful cookie (or equivalent post-cookie identifier), or the authenticated visitor who is logged in. Here are some examples from the personalization pyramid:
- Unknown
- Guest
- Authenticated
- Default
- Referred
- Role
- Cohort
- Unique ID
Every organization with any digital presence has data. It's a matter of asking what data you can ethically collect on users, how inherently reliable and valuable it is, and how you can use it (sometimes known as "data activation"). Fortunately, the tide is turning to first-party data: a recent study by Twilio estimates some 80% of businesses are using at least some type of first-party data to personalize the customer experience.
Source: "The State of Personalization 2021" by Twilio. Survey respondents were n=2,700 adult consumers who have purchased something online in the past 6 months, and n=300 adult manager+ decision-makers at consumer-facing companies that provide goods and/or services online. Respondents were from the United States, United Kingdom, Australia, and New Zealand. Data was collected from April 8 to April 20, 2021.
First-party data offers multiple advantages on the UX front, including being relatively simple to collect, more likely to be accurate, and less susceptible to the "creep factor" of third-party data. So a key part of your UX strategy should be to determine the best form of data collection for your audiences. Here are some examples:
Figure 1.1.2: Example of a personalization maturity curve, showing progression from basic recommendations functionality to true individualization. Credit: https://kibocommerce.com/blog/kibos-personalization-maturity-chart/
There is a progression of profiling when it comes to recognizing and making decisions about different audiences and their signals. It tends to move towards more granular constructs about smaller and smaller cohorts of users as time, confidence, and data volume grow.
While some combination of implicit and explicit data (more commonly referred to as first-party and third-party data) is generally a prerequisite for any implementation, ML efforts are typically not cost-effective directly out of the box. This is because a strong data backbone and content repository is a prerequisite for optimization. But these approaches should be considered as part of the larger roadmap and may indeed help accelerate the organization's overall progress. Typically at this point you will partner with key stakeholders and product owners to design a profiling model. The profiling model includes defining the approach to configuring profiles, profile keys, profile cards, and pattern cards. A multi-faceted approach to profiling makes it scalable.
Pulling it Together
While the cards comprise the starting point of an inventory of sorts (we provide blanks for you to tailor your own) and a set of potential levers and motivations for the style of personalization activities you aspire to deliver, they are more valuable when thought of as a grouping.
In assembling a card "hand," one can begin to trace the entire trajectory from leadership focus down through strategic and tactical execution. It is also at the heart of the way both co-authors have conducted workshops in assembling a program backlog, which is a fine subject for another article.
In the meantime, what is important to note is that while each colored class of card is helpful to survey in understanding the range of choices potentially at your disposal, the real value lies in threading through them and making concrete decisions about for whom this decisioning will be made: where, when, and how.
Scenario A: We want to use personalization to improve customer satisfaction on the website. For unknown users, we will create a short quiz to better identify what the user has come to do. This is sometimes referred to as "badging" a user in onboarding contexts, to better characterize their present intent and context.
Lay Down Your Cards
Any sustainable personalization strategy must consider near-, mid-, and long-term goals. Even with leading CMS platforms like Sitecore and Adobe, or the most exciting composable CMS DXP out there, there is simply no "easy button" with which a personalization program can be stood up and meaningful results viewed immediately. That said, there is a common grammar to all personalization activities, just like every sentence has nouns and verbs. These cards attempt to map that territory.
Mobile-First CSS: Is It Time for a Rethink?
The mobile-first design methodology is great: it focuses on what really matters to the user, it's well-practiced, and it's been a common design pattern for years. So developing your CSS mobile-first should be great, too...right?
Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview see "What is Mobile First CSS and Why Does It Rock?"). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that's harder to maintain. Admit it: how many of us willingly want that?
On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions youâre working on. To help you get started, hereâs how I go about tackling the factors you need to watch for, and Iâll discuss some alternate solutions if mobile-first doesnât seem to suit your project.
Advantages of mobile-first
Some of the things to like with mobile-first CSS development (and why it's been the de facto development methodology for so long) make a lot of sense:
Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy: you just focus on the mobile view and get developing.
Tried and tested. It's a tried and tested methodology that's worked for years for a reason: it solves a problem really well.
Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project).
Prevents desktop-centric development. As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices!
Disadvantages of mobile-first
Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications:
More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints.
Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible.
Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) require all higher breakpoints to be regression tested.
The browser can't prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don't leverage the browser's capability to download CSS files in priority order.
There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient. It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won't be able to use a utility class for a style that has been reset with a higher specificity.
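As a minimal sketch of that utility-class pitfall (the .my-block and .p-sm class names and the 1024px breakpoint are purely illustrative, not from a specific project):

.my-block {
  padding: 20px;
  /* Mobile-first: on desktop we only want the browser default back,
     but we must re-declare the padding just to undo the mobile value. */
  @media (min-width: 1024px) {
    padding: 0;
  }
}

/* A spacing utility we might want to apply to some instances on desktop.
   Because .my-block now declares padding at that breakpoint too, the utility
   has to beat it in the cascade; had the mobile padding been scoped to a
   closed range instead, there would be no desktop declaration to fight. */
.p-sm {
  padding: 8px;
}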
With this in mind, I'm developing CSS with a focus on the default values much more these days. Since there's no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously. I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set).
This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component's layout looks like it should be based on Flexbox at all breakpoints, it's fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges. Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don't want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view!
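Here's a minimal sketch of that Flexbox-on-mobile, Grid-on-larger-screens split (the .card-list class name and the 768px breakpoint are illustrative, not from a real project):

.card-list {
  /* Mobile: a simple flex column, scoped to a closed range
     so none of it leaks into wider viewports. */
  @media (max-width: 767.98px) {
    display: flex;
    flex-direction: column;
    gap: 16px;
  }
  /* Larger screens: an independent Grid layout; nothing from
     the mobile range needs to be unset or overridden here. */
  @media (min-width: 768px) {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    gap: 24px;
  }
}

Each range can be written, and later edited, without touching the other.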
Though this approach isnât going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others.
Having said that, I donât feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheetâa sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but thatâs by no means a requirement.
Closed media query ranges in practice
In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges. To illustrate the difference (I'm using SCSS for brevity), let's assume there are three visual designs:
- smaller than 768px
- from 768px to below 1024px
- 1024px and anything larger
Take a simple example where a block-level element has a default padding of "20px," which is overwritten at tablet to be "40px" and set back to "20px" on desktop.
Classic min-width mobile-first
.my-block {
  padding: 20px;
  @media (min-width: 768px) {
    padding: 40px;
  }
  @media (min-width: 1024px) {
    padding: 20px;
  }
}
Closed media query range
.my-block {
  padding: 20px;
  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}
The subtle difference is that the mobile-first example sets the default padding to "20px" and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to "20px" and only overrides it at the relevant breakpoint where it isn't the default value (in this instance, tablet is the exception).
The goal is to:
- Only set styles when needed.
- Not set them with the expectation of overwriting them later on, again and again.
To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. Weâll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited.
Taking the above example, if we find that .my-block spacing on desktop is already accounted for by the margin at that breakpoint, and since we want to remove the padding altogether, we could do this by setting the mobile padding in a closed media query range.
.my-block {
  @media (max-width: 767.98px) {
    padding: 20px;
  }
  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}
The browser default padding for our block is "0," so instead of adding a desktop media query and using unset or "0" for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won't get picked up at wider breakpoints. At the desktop breakpoint, we won't need to set any padding style, as we want the browser default value.
Back in the day, keeping the number of requests to a minimum was very important due to the browser's limit of concurrent requests (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority.
With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn't. This is more performant and can reduce the overall time page rendering is blocked.
Which HTTP version are you using?
To determine which version of HTTP you're using, go to your website and open your browser's dev tools. Next, select the Network tab and make sure the Protocol column is visible. If "h2" is listed under Protocol, it means HTTP/2 is being used.
Note: to view the Protocol column in your browser's dev tools, go to the Network tab, reload your page, right-click any column header (e.g., Name), and check the Protocol column.
Note: for a summarized comparison, see ImageKit's "HTTP/2 vs. HTTP/1."
Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent browser support for HTTP/2.
Splitting the CSS
Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they're render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority.
In the following example of a website visited on a mobile breakpoint, we can see the mobile and default CSS are loaded with "Highest" priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they'll be needed later, but with "Lowest" priority.
With bundled CSS, the browser will have to download the CSS file and parse it before rendering can start.
Whereas, as noted, with the CSS separated into different files linked and marked up with the relevant media attribute, the browser can prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths, as opposed to classic mobile-first min-width queries, where the desktop browser would have to download all the CSS with Highest priority. We can't assume that desktop users always have a fast connection. For instance, in many rural areas, internet connection speeds are still slow.
The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below.
Bundled CSS
<link href="site.css" rel="stylesheet">
This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority.
Separated CSS
<link href="default.css" rel="stylesheet"><link href="mobile.css" media="screen and (max-width: 767.98px)" rel="stylesheet"><link href="tablet.css" media="screen and (min-width: 768px) and (max-width: 1083.98px)" rel="stylesheet"><link href="desktop.css" media="screen and (min-width: 1084px)" rel="stylesheet"><link href="print.css" media="print" rel="stylesheet">
Separating the CSS and specifying a media attribute value on each link tag allows the browser to prioritize what it currently needs. Out of the five files listed above, two will be downloaded with Highest priority: the default file, and the file that matches the current media query. The others will be downloaded with Lowest priority.
Depending on the project's deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test.
The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices.
I don't think anyone wants to return to that development model again, but it's important we don't lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device (any device) over others. For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what's an exception, seems like the natural next step. I've started noticing small simplifications in my own CSS, as well as other developers', and testing and maintenance work has become a bit simpler and more productive too.
In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides. But whichever methodology you choose, it needs to suit the project. Mobile-first may (or may not) turn out to be the best choice for what's involved, but first you need to solidly understand the trade-offs you're stepping into.
Designers, (Re)define Success First
About two and a half years ago, I introduced the idea of daily ethical design. It was born out of my frustration with the many obstacles to achieving design thatâs usable and equitable; protects peopleâs privacy, agency, and focus; benefits society; and restores nature. I argued that we need to overcome the inconveniences that prevent us from acting ethically and that we need to elevate design ethics to a more practical level by structurally integrating it into our daily work, processes, and tools.
Unfortunately, weâre still very far from this ideal.
At the time, I didnât know yet how to structurally integrate ethics. Yes, I had found some tools that had worked for me in previous projects, such as using checklists, assumption tracking, and âdark realityâ sessions, but I didnât manage to apply those in every project. I was still struggling for time and support, and at best I had only partially achieved a higher (moral) quality of designâwhich is far from my definition of structurally integrated.
I decided to dig deeper for the root causes in business that prevent us from practicing daily ethical design. Now, after much research and experimentation, I believe that Iâve found the key that will let us structurally integrate ethics. And itâs surprisingly simple! But first we need to zoom out to get a better understanding of what weâre up against.
Influence the system
Sadly, we're trapped in a capitalistic system that reinforces consumerism and inequality, and it's obsessed with the fantasy of endless growth. Sea levels, temperatures, and our demand for energy continue to rise unchallenged, while the gap between rich and poor continues to widen. Shareholders expect ever-higher returns on their investments, and companies feel forced to set short-term objectives that reflect this. Over the last decades, those objectives have twisted our well-intended human-centered mindset into a powerful machine that promotes ever-higher levels of consumption. When we're working for an organization that pursues "double-digit growth" or "aggressive sales targets" (which is 99 percent of us), that's very hard to resist while remaining human friendly. Even with our best intentions, and even though we like to say that we create solutions for people, we're a part of the problem.
What can we do to change this?
We can start by acting on the right level of the system. Donella H. Meadows, a system thinker, once listed ways to influence a system in order of effectiveness. When you apply these to design, you get:
- At the lowest level of effectiveness, you can affect numbers such as usability scores or the number of design critiques. But none of that will change the direction of a company.
- Similarly, affecting buffers (such as team budgets), stocks (such as the number of designers), flows (such as the number of new hires), and delays (such as the time that it takes to hear about the effect of design) wonât significantly affect a company.
- Focusing instead on feedback loops such as management control, employee recognition, or design-system investments can help a company become better at achieving its objectives. But that doesnât change the objectives themselves, which means that the organization will still work against your ethical-design ideals.
- The next level, information flows, is what most ethical-design initiatives focus on now: the exchange of ethical methods, toolkits, articles, conferences, workshops, and so on. This is also where ethical design has remained mostly theoretical. Weâve been focusing on the wrong level of the system all this time.
- Take rules, for exampleâthey beat knowledge every time. There can be widely accepted rules, such as how finance works, or a scrum teamâs definition of done. But ethical design can also be smothered by unofficial rules meant to maintain profits, often revealed through comments such as âthe client didnât ask for itâ or âdonât make it too big.â
- Changing the rules without holding official power is very hard. Thatâs why the next level is so influential: self-organization. Experimentation, bottom-up initiatives, passion projects, self-steering teamsâall of these are examples of self-organization that improve the resilience and creativity of a company. Itâs exactly this diversity of viewpoints thatâs needed to structurally tackle big systemic issues like consumerism, wealth inequality, and climate change.
- Yet even stronger than self-organization are objectives and metrics. Our companies want to make more money, which means that everything and everyone in the company does their best to⊠make the company more money. And once I realized that profit is nothing more than a measurement, I understood how crucial a very specific, defined metric can be toward pushing a company in a certain direction.
The takeaway? If we truly want to incorporate ethics into our daily design practice, we must first change the measurable objectives of the company we work for, from the bottom up.
Redefine success
Traditionally, we consider a product or service successful if it's desirable to humans, technologically feasible, and financially viable. You tend to see these represented as equals; if you type the three words in a search engine, you'll find diagrams of three equally sized, evenly arranged circles.
But in our hearts, we all know that the three dimensions arenât equally weighted: itâs viability that ultimately controls whether a product will go live. So a more realistic representation might look like this:
Desirability and feasibility are the means; viability is the goal. Companiesâoutside of nonprofits and charitiesâexist to make money.
A genuinely purpose-driven company would try to reverse this dynamic: it would recognize finance for what it was intended to be, a means. So both feasibility and viability are means to achieve what the company set out to achieve. It makes intuitive sense: to achieve most anything, you need resources, people, and money. (Fun fact: the Italian language knows no difference between feasibility and viability; both are simply fattibilità.)
But simply swapping viable for desirable isnât enough to achieve an ethical outcome. Desirability is still linked to consumerism because the associated activities aim to identify what people wantâwhether itâs good for them or not. Desirability objectives, such as user satisfaction or conversion, donât consider whether a product is healthy for people. They donât prevent us from creating products that distract or manipulate people or stop us from contributing to societyâs wealth inequality. Theyâre unsuitable for establishing a healthy balance with nature.
Thereâs a fourth dimension of success thatâs missing: our designs also need to be ethical in the effect that they have on the world.
This is hardly a new idea. Many similar models exist, some calling the fourth dimension accountability, integrity, or responsibility. What Iâve never seen before, however, is the necessary step that comes after: to influence the system as designers and to make ethical design more practical, we must create objectives for ethical design that are achievable and inspirational. Thereâs no one way to do this because it highly depends on your culture, values, and industry. But Iâll give you the version that I developed with a group of colleagues at a design agency. Consider it a template to get started.
Pursue well-being, equity, and sustainability
We created objectives that address design's effect on three levels: individual, societal, and global.
An objective on the individual level tells us what success is beyond the typical focus of usability and satisfactionâinstead considering matters such as how much time and attention is required from users. We pursued well-being:
We create products and services that allow for peopleâs health and happiness. Our solutions are calm, transparent, nonaddictive, and nonmisleading. We respect our usersâ time, attention, and privacy, and help them make healthy and respectful choices.
An objective on the societal level forces us to consider our impact beyond just the user, widening our attention to the economy, communities, and other indirect stakeholders. We called this objective equity:
We create products and services that have a positive social impact. We consider economic equality, racial justice, and the inclusivity and diversity of people as teams, users, and customer segments. We listen to local culture, communities, and those we affect.
Finally, the objective on the global level aims to ensure that we remain in balance with the only home we have as humanity. Referring to it simply as sustainability, our definition was:
We create products and services that reward sufficiency and reusability. Our solutions support the circular economy: we create value from waste, repurpose products, and prioritize sustainable choices. We deliver functionality instead of ownership, and we limit energy use.
In short, ethical design (to us) meant achieving wellbeing for each user and an equitable value distribution within society through a design that can be sustained by our living planet. When we introduced these objectives in the company, for many colleagues, design ethics and responsible design suddenly became tangible and achievable through practicalâand even familiarâactions.
Measure impact
But defining these objectives still isn't enough. What truly caught the attention of senior management was the fact that we created a way to measure every design project's well-being, equity, and sustainability.
This overview lists example metrics that you can use as you pursue well-being, equity, and sustainability:
Thereâs a lot of power in measurement. As the saying goes, what gets measured gets done. Donella Meadows once shared this example:
âIf the desired system state is national security, and that is defined as the amount of money spent on the military, the system will produce military spending. It may or may not produce national security.â
This phenomenon explains why desirability is a poor indicator of success: itâs typically defined as the increase in customer satisfaction, session length, frequency of use, conversion rate, churn rate, download rate, and so on. But none of these metrics increase the health of people, communities, or ecosystems. What if instead we measured success through metrics for (digital) well-being, such as (reduced) screen time or software energy consumption?
Thereâs another important message here. Even if we set an objective to build a calm interface, if we were to choose the wrong metric for calmnessâsay, the number of interface elementsâwe could still end up with a screen that induces anxiety. Choosing the wrong metric can completely undo good intentions.
Additionally, choosing the right metric is enormously helpful in focusing the design team. Once you go through the exercise of choosing metrics for our objectives, youâre forced to consider what success looks like concretely and how you can prove that youâve reached your ethical objectives. It also forces you to consider what we as designers have control over: what can I include in my design or change in my process that will lead to the right type of success? The answer to this question brings a lot of clarity and focus.
And finally, itâs good to remember that traditional businesses run on measurements, and managers love to spend much time discussing charts (ideally hockey-stick shaped)âespecially if they concern profit, the one-above-all of metrics. For good or ill, to improve the system, to have a serious discussion about ethical design with managers, weâll need to speak that business language.
Practice daily ethical design
Once you've defined your objectives and you have a reasonable idea of the potential metrics for your design project, only then do you have a chance to structurally practice ethical design. It "simply" becomes a matter of using your creativity and choosing from all the knowledge and toolkits already available to you.
I think this is quite exciting! It opens a whole new set of challenges and considerations for the design process. Should you go with that energy-consuming video or would a simple illustration be enough? Which typeface is the most calm and inclusive? Which new tools and methods do you use? When is the websiteâs end of life? How can you provide the same service while requiring less attention from users? How do you make sure that those who are affected by decisions are there when those decisions are made? How can you measure our effects?
The redefinition of success will completely change what it means to do good design.
There is, however, a final piece of the puzzle thatâs missing: convincing your client, product owner, or manager to be mindful of well-being, equity, and sustainability. For this, itâs essential to engage stakeholders in a dedicated kickoff session.
Kick it off or fall back to status quo
The kickoff is the most important meeting, and it's also the easiest one to forget to include. It consists of two major phases: 1) the alignment of expectations, and 2) the definition of success.
In the first phase, the entire (design) team goes over the project brief and meets with all the relevant stakeholders. Everyone gets to know one another and express their expectations on the outcome and their contributions to achieving it. Assumptions are raised and discussed. The aim is to get on the same level of understanding and to in turn avoid preventable miscommunications and surprises later in the project.
For example, for a recent freelance project that aimed to design a digital platform that facilitates US student advisorsâ documentation and communication, we conducted an online kickoff with the client, a subject-matter expert, and two other designers. We used a combination of canvases on Miro: one with questions from âManual of Meâ (to get to know each other), a Team Canvas (to express expectations), and a version of the Project Canvas to align on scope, timeline, and other practical matters.
The above is the traditional purpose of a kickoff. But just as important as expressing expectations is agreeing on what success means for the projectâin terms of desirability, viability, feasibility, and ethics. What are the objectives in each dimension?
Agreement on what success means at such an early stage is crucial because you can rely on it for the remainder of the project. If, for example, the design team wants to build an inclusive app for a diverse user group, they can raise diversity as a specific success criterion during the kickoff. If the client agrees, the team can refer back to that promise throughout the project. âAs we agreed in our first meeting, having a diverse user group that includes A and B is necessary to build a successful product. So we do activity X and follow research process Y.â Compare those odds to a situation in which the team didnât agree to that beforehand and had to ask for permission halfway through the project. The client might argue that that came on top of the agreed scopeâand sheâd be right.
In the case of this freelance project, to define success I prepared a round canvas that I call the Wheel of Success. It consists of an inner ring, meant to capture ideas for objectives, and a set of outer rings, meant to capture ideas on how to measure those objectives. The rings are divided into six dimensions of successful design: healthy, equitable, sustainable, desirable, feasible, and viable.
We went through each dimension, writing down ideas on digital sticky notes. Then we discussed our ideas and verbally agreed on the most important ones. For example, our client agreed that sustainability and progressive enhancement are important success criteria for the platform. And the subject-matter expert emphasized the importance of including students from low-income and disadvantaged groups in the design process.
After the kickoff, we summarized our ideas and shared understanding in a project brief that captured these aspects:
- the projectâs origin and purpose: why are we doing this project?
- the problem definition: what do we want to solve?
- the concrete goals and metrics for each success dimension: what do we want to achieve?
- the scope, process, and role descriptions: how will we achieve it?
With such a brief in place, you can use the agreed-upon objectives and concrete metrics as a checklist of success, and your design team will be ready to pursue the right objectiveâusing the tools, methods, and metrics at their disposal to achieve ethical outcomes.
Conclusion
Over the past year, quite a few colleagues have asked me, "Where do I start with ethical design?" My answer has always been the same: organize a session with your stakeholders to (re)define success. Even though you might not always be 100 percent successful in agreeing on goals that cover all responsibility objectives, that beats the alternative (the status quo) every time. If you want to be an ethical, responsible designer, there's no skipping this step.
To be even more specific: if you consider yourself a strategic designer, your challenge is to define ethical objectives, set the right metrics, and conduct those kick-off sessions. If you consider yourself a system designer, your starting point is to understand how your industry contributes to consumerism and inequality, understand how finance drives business, and brainstorm which levers are available to influence the system on the highest level. Then redefine success to create the space to exercise those levers.
And for those who consider themselves service designers or UX designers or UI designers: if you truly want to have a positive, meaningful impact, stay away from the toolkits and meetups and conferences for a while. Instead, gather your colleagues and define goals for well-being, equity, and sustainability through design. Engage your stakeholders in a workshop and challenge them to think of ways to achieve and measure those ethical goals. Take their input, make it concrete and visible, ask for their agreement, and hold them to it.
Otherwise, Iâm genuinely sorry to say, youâre wasting your precious time and creative energy.
Of course, engaging your stakeholders in this way can be uncomfortable. Many of my colleagues expressed doubts such as âWhat will the client think of this?,â âWill they take me seriously?,â and âCanât we just do it within the design team instead?â In fact, a product manager once asked me why ethics couldnât just be a structured part of the design processâto just do it without spending the effort to define ethical objectives. Itâs a tempting idea, right? We wouldnât have to have difficult discussions with stakeholders about what values or which key-performance indicators to pursue. It would let us focus on what we like and do best: designing.
But as systems theory tells us, thatâs not enough. For those of us who arenât from marginalized groups and have the privilege to be able to speak up and be heard, that uncomfortable space is exactly where we need to be if we truly want to make a difference. We canât remain within the design-for-designers bubble, enjoying our privileged working-from-home situation, disconnected from the real world out there. For those of us who have the possibility to speak up and be heard: if we solely keep talking about ethical design and it remains at the level of articles and toolkitsâweâre not designing ethically. Itâs just theory. We need to actively engage our colleagues and clients by challenging them to redefine success in business.
With a bit of courage, determination, and focus, we can break out of this cage that finance and business-as-usual have built around us and become facilitators of a new type of business that can see beyond financial value. We just need to agree on the right objectives at the start of each design project, find the right metrics, and realize that we already have everything that we need to get started. Thatâs what it means to do daily ethical design.
For their inspiration and support over the years, I would like to thank Emanuela Cozzi Schettini, José Gallegos, Annegret Bönemann, Ian Dorr, Vera Rademaker, Virginia Rispoli, Cecilia Scolaro, Rouzbeh Amini, and many others.
Breaking Out of the Box
CSS is about styling boxes. In fact, the whole web is made of boxes, from the browser viewport to elements on a page. But every once in a while a new feature comes along that makes us rethink our design approach.
Round displays, for example, make it fun to play with circular clip areas. Mobile screen notches and virtual keyboards offer challenges to best organize content that stays clear of them. And dual screen or foldable devices make us rethink how to best use available space in a number of different device postures.
Sketches of a round display, a common rectangular mobile display, and a device with a foldable display.
These recent evolutions of the web platform made it both more challenging and more interesting to design products. They're great opportunities for us to break out of our rectangular boxes.
I'd like to talk about a new feature similar to the above: the Window Controls Overlay for Progressive Web Apps (PWAs).
Progressive Web Apps are blurring the lines between apps and websites. They combine the best of both worlds. On one hand, they're stable, linkable, searchable, and responsive just like websites. On the other hand, they provide additional powerful capabilities, work offline, and read files just like native apps.
As a design surface, PWAs are really interesting because they challenge us to think about what mixing web and device-native user interfaces can be. On desktop devices in particular, we have more than 40 years of history telling us what applications should look like, and it can be hard to break out of this mental model.
At the end of the day though, PWAs on desktop are constrained to the window they appear in: a rectangle with a title bar at the top.
Here's what a typical desktop PWA app looks like:
Sketches of two rectangular user interfaces representing the desktop Progressive Web App status quo on the macOS and Windows operating systems, respectively.
Sure, as the author of a PWA, you get to choose the color of the title bar (using the Web Application Manifest theme_color property), but that's about it.
What if we could think outside this box, and reclaim the real estate of the app's entire window? Doing so would give us a chance to make our apps more beautiful and feel more integrated in the operating system.
This is exactly what the Window Controls Overlay offers. This new PWA functionality makes it possible to take advantage of the full surface area of the app, including where the title bar normally appears.
About the title bar and window controls
Let's start with an explanation of what the title bar and window controls are.
The title bar is the area displayed at the top of an app window, which usually contains the app's name. Window controls are the affordances, or buttons, that make it possible to minimize, maximize, or close the app's window, and are also displayed at the top.
A sketch of a rectangular application user interface highlighting the title bar area and window control buttons.
Window Controls Overlay removes the physical constraint of the title bar and window controls areas. It frees up the full height of the app window, enabling the title bar and window control buttons to be overlaid on top of the application's web content.
A sketch of a rectangular application user interface using Window Controls Overlay. The title bar and window controls are no longer in an area separated from the app's content.
If you are reading this article on a desktop computer, take a quick look at other apps. Chances are they're already doing something similar to this. In fact, the very web browser you are using to read this uses the top area to display tabs.
A screenshot of the top area of a browser's user interface showing a group of tabs that share the same horizontal space as the app window controls.
Spotify displays album artwork all the way to the top edge of the application window.
A screenshot of an album in Spotify's desktop application. Album artwork spans the entire width of the main content area, all the way to the top and right edges of the window, and the right edge of the main navigation area on the left side. The application and album navigation controls are overlaid directly on top of the album artwork.
Microsoft Word uses the available title bar space to display the auto-save and search functionalities, and more.
A screenshot of Microsoft Word's toolbar interface. Document file information, search, and other functionality appear at the top of the window, sharing the same horizontal space as the app's window controls.
The whole point of this feature is to allow you to make use of this space with your own content while providing a way to account for the window control buttons. And it enables you to offer this modified experience on a range of platforms while not adversely affecting the experience on browsers or devices that don't support Window Controls Overlay. After all, PWAs are all about progressive enhancement, so this feature is a chance to enhance your app to use this extra space when it's available.
Let's use the feature
For the rest of this article, we'll be working on a demo app to learn more about using the feature.
The demo app is called 1DIV. It's a simple CSS playground where users can create designs using CSS and a single HTML element.
The app has two pages. The first lists the existing CSS designs you've created:
A screenshot of the 1DIV app displaying a thumbnail grid of CSS designs a user created.
The second page enables you to create and edit CSS designs:
A screenshot of the 1DIV app editor page. The top half of the window displays a rendered CSS design, and a text editor on the bottom half of the window displays the CSS used to create it.
Since I've added a simple web manifest and service worker, we can install the app as a PWA on desktop. Here is what it looks like on macOS:
Screenshots of the 1DIV app thumbnail view and CSS editor view on macOS. This version of the app's window has a separate control bar at the top for the app name and window control buttons.
And on Windows:
Screenshots of the 1DIV app thumbnail view and CSS editor view on the Windows operating system. This version of the app's window also has a separate control bar at the top for the app name and window control buttons.
Our app is looking good, but the white title bar in the first page is wasted space. In the second page, it would be really nice if the design area went all the way to the top of the app window.
Let's use the Window Controls Overlay feature to improve this.
Enabling Window Controls Overlay
The feature is still experimental at the moment. To try it, you need to enable it in one of the supported browsers.
As of now, it has been implemented in Chromium, as a collaboration between Microsoft and Google. We can therefore use it in Chrome or Edge by going to the internal about://flags page, and enabling the Desktop PWA Window Controls Overlay flag.
Using Window Controls Overlay
To use the feature, we need to add the following display_override member to our web app's manifest file:
{
"name": "1DIV",
"description": "1DIV is a mini CSS playground",
"lang": "en-US",
"start_url": "/",
"theme_color": "#ffffff",
"background_color": "#ffffff",
"display_override": [
"window-controls-overlay"
],
"icons": [
...
]
}
On the surface, the feature is really simple to use. This manifest change is the only thing we need to make the title bar disappear and turn the window controls into an overlay.
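As an aside for browsers that haven't shipped the feature yet: display_override is checked in order, and when no entry is supported the browser falls back to the manifest's display member. A slightly more defensive manifest could pair the two; this is a sketch, and standalone is just one reasonable fallback value:

{
  "display": "standalone",
  "display_override": [
    "window-controls-overlay"
  ]
}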
However, to provide a great experience for all users regardless of what device or browser they use, and to make the most of the title bar area in our design, we'll need a bit of CSS and JavaScript code.
Here is what the app looks like now:
Screenshot of the 1DIV app thumbnail view using Window Controls Overlay on macOS. The separate top bar area is gone, but the window controls are now blocking some of the app's interface.
The title bar is gone, which is what we wanted, but our logo, search field, and NEW button are partially covered by the window controls because now our layout starts at the top of the window.
It's similar on Windows, with the difference that the close, maximize, and minimize buttons appear on the right side, grouped together with the PWA control buttons:
Screenshot of the 1DIV app thumbnail display using Window Controls Overlay on the Windows operating system. The separate top bar area is gone, but the window controls are now blocking some of the app's content.
Using CSS to keep clear of the window controls
Along with the feature, new CSS environment variables have been introduced:
titlebar-area-x
titlebar-area-y
titlebar-area-width
titlebar-area-height
You use these variables with the CSS env() function to position your content where the title bar would have been while ensuring it won't overlap with the window controls. In our case, we'll use two of the variables to position our header, which contains the logo, search bar, and NEW button.
header {
position: absolute;
left: env(titlebar-area-x, 0);
width: env(titlebar-area-width, 100%);
height: var(--toolbar-height);
}
The titlebar-area-x variable gives us the distance from the left of the viewport to where the title bar would appear, and titlebar-area-width is its width. (Remember, this is not equivalent to the width of the entire viewport, just the title bar portion, which, as noted earlier, doesn't include the window controls.)
By doing this, we make sure our content remains fully visible. We're also defining fallback values (the second parameter in the env() function) for when the variables are not defined (such as on non-supporting browsers, or when the Window Controls Overlay feature is disabled).
Now our header adapts to its surroundings, and it doesn't feel like the window control buttons have been added as an afterthought. The app looks a lot more like a native app.
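All four environment variables can also be combined if you ever want an element to cover exactly the strip where the title bar used to be. Here's a minimal sketch, assuming a hypothetical .titlebar-chrome element that isn't part of the 1DIV demo:

/* Pin an element to the title bar strip, with fallbacks for browsers
   that don't support Window Controls Overlay. */
.titlebar-chrome {
  position: fixed;
  left: env(titlebar-area-x, 0);
  top: env(titlebar-area-y, 0);
  width: env(titlebar-area-width, 100%);
  height: env(titlebar-area-height, 0);
}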
Changing the window controls background color so it blends in
Now let's take a closer look at our second page: the CSS playground editor.
Screenshots of the 1DIV app CSS editor view with Window Controls Overlay in macOS and Windows, respectively. The window controls overlay areas have a solid white background color, which contrasts with the hot pink color of the example CSS design displayed in the editor.
Not great. Our CSS demo area does go all the way to the top, which is what we wanted, but the way the window controls appear as white rectangles on top of it is quite jarring.
We can fix this by changing the app's theme color. There are a couple of ways to define it:
- PWAs can define a theme color in the web app manifest file using the theme_color manifest member. This color is then used by the OS in different ways. On desktop platforms, it is used to provide a background color to the title bar and window controls.
- Websites can use the theme-color meta tag as well. It's used by browsers to customize the color of the UI around the web page. For PWAs, this color can override the manifest theme_color.
In our case, we can set the manifest theme_color to white to provide the right default color for our app. The OS will read this color value when the app is installed and use it to make the window controls background color white. This color works great for our main page with the list of demos.
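For reference, the corresponding tag in the page's HTML might simply mirror that manifest value (a sketch, not code taken from the demo app):

<meta name="theme-color" content="#ffffff">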
The theme-color meta tag can be changed at runtime, using JavaScript. So we can do that to override the white with the right demo background color when one is opened.
Here is the function we'll use:
function themeWindow(bgColor) {
document.querySelector("meta[name=theme-color]").setAttribute('content', bgColor);
}
With this in place, we can imagine how using color and CSS transitions can produce a smooth change from the list page to the demo page, and enable the window control buttons to blend in with the rest of the app's interface.
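As a rough sketch of how that might be wired up (openDemo, closeDemo, and demo.backgroundColor are hypothetical names, not part of the demo app's actual code):

// Switch the overlay color to the demo's background when it opens,
// and restore the default when returning to the list. This assumes the
// page already contains a <meta name="theme-color"> tag, since
// themeWindow() queries for it.
function openDemo(demo) {
  themeWindow(demo.backgroundColor || '#ffffff');
}

function closeDemo() {
  themeWindow('#ffffff');
}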
Screenshot of the 1DIV app CSS editor view on the Windows operating system with Window Controls Overlay and updated CSS demonstrating how the window control buttons blend in with the rest of the app's interface.
Dragging the window
Now, getting rid of the title bar entirely does have an important accessibility consequence: it's much more difficult to move the application window around.
The title bar provides a sizable area for users to click and drag, but by using the Window Controls Overlay feature, this area becomes limited to where the control buttons are, and users have to very precisely aim between these buttons to move the window.
Fortunately, this can be fixed using CSS with the app-region property. This property is, for now, only supported in Chromium-based browsers and needs the -webkit- vendor prefix.
To make any element of the app become a dragging target for the window, we can use the following:
-webkit-app-region: drag;
It is also possible to explicitly make an element non-draggable:
-webkit-app-region: no-drag;
These options can be useful for us. We can make the entire header a dragging target, but make the search field and NEW button within it non-draggable so they can still be used as normal.
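In CSS, that header-based approach might look like this sketch:

/* Make the whole header draggable, but keep its controls interactive. */
header {
  -webkit-app-region: drag;
}

header .search,
header .new {
  -webkit-app-region: no-drag;
}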
However, because the editor page doesn't display the header, users wouldn't be able to drag the window while editing code. So let's use a different approach. We'll create another element before our header, also absolutely positioned, and dedicated to dragging the window.
<div class="drag"></div>
<header>...</header>
.drag {
position: absolute;
top: 0;
width: 100%;
height: env(titlebar-area-height, 0);
-webkit-app-region: drag;
}
With the above code, we're making the draggable area span the entire viewport width, and using the titlebar-area-height variable to make it as tall as what the title bar would have been. This way, our draggable area is aligned with the window control buttons as shown below.
And, now, to make sure our search field and button remain usable:
header .search,
header .new {
-webkit-app-region: no-drag;
}
With the above code, users can click and drag where the title bar used to be. It is an area that users expect to be able to use to move windows on desktop, and we're not breaking this expectation, which is good.
An animated view of the 1DIV app being dragged across a Windows desktop with the mouse.
Adapting to window resize
It may be useful for an app to know both whether the window controls overlay is visible and when its size changes. In our case, if the user made the window very narrow, there wouldn't be enough space for the search field, logo, and button to fit, so we'd want to push them down a bit.
The Window Controls Overlay feature comes with a JavaScript API we can use to do this: navigator.windowControlsOverlay.
The API provides three interesting things:
- navigator.windowControlsOverlay.visible lets us know whether the overlay is visible.
- navigator.windowControlsOverlay.getBoundingClientRect() lets us know the position and size of the title bar area.
- navigator.windowControlsOverlay.ongeometrychange lets us know when the size or visibility changes.
Let's use this to be aware of the size of the title bar area and move the header down if it's too narrow.
if (navigator.windowControlsOverlay) {
navigator.windowControlsOverlay.addEventListener('geometrychange', () => {
const { width } = navigator.windowControlsOverlay.getBoundingClientRect();
document.body.classList.toggle('narrow', width < 250);
});
}
In the example above, we set the narrow class on the body of the app if the title bar area is narrower than 250px. We could do something similar with a media query, but using the windowControlsOverlay API has two advantages for our use case:
- It's only fired when the feature is supported and used; we don't want to adapt the design otherwise.
- We get the size of the title bar area across operating systems, which is great because the size of the window controls is different on Mac and Windows. Using a media query wouldn't make it possible for us to know exactly how much space remains.
.narrow header {
top: env(titlebar-area-height, 0);
left: 0;
width: 100%;
}
Using the above CSS code, we can move our header down to stay clear of the window control buttons when the window is too narrow, and move the thumbnails down accordingly.
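One detail worth noting: the geometrychange event only fires when something changes, so it can help to run the same check once on startup as well. A small sketch:

// Apply the narrow class based on the initial overlay geometry, using the
// visible flag to confirm the overlay is actually in use.
if (navigator.windowControlsOverlay && navigator.windowControlsOverlay.visible) {
  const { width } = navigator.windowControlsOverlay.getBoundingClientRect();
  document.body.classList.toggle('narrow', width < 250);
}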
A screenshot of the 1DIV app on Windows showing the app's content adjusted for a much narrower viewport.
Thirty pixels of exciting design opportunities
Using the Window Controls Overlay feature, we were able to take our simple demo app and turn it into something that feels so much more integrated on desktop devices. Something that reaches out of the usual window constraints and provides a custom experience for its users.
In reality, this feature only gives us about 30 pixels of extra room and comes with challenges on how to deal with the window controls. And yet, this extra room and those challenges can be turned into exciting design opportunities.
More devices of all shapes and forms get invented all the time, and the web keeps on evolving to adapt to them. New features get added to the web platform to allow us, web authors, to integrate more and more deeply with those devices. From watches or foldable devices to desktop computers, we need to evolve our design approach for the web. Building for the web now lets us think outside the rectangular box.
So let's embrace this. Let's use the standard technologies already at our disposal, and experiment with new ideas to provide tailored experiences for all devices, all from a single codebase!
If you get a chance to try the Window Controls Overlay feature and have feedback about it, you can open issues on the spec's repository. It's still early in the development of this feature, and you can help make it even better. Or, you can take a look at the feature's existing documentation, or this demo app and its source code.
How to Sell UX Research with Two Simple Questions
Do you find yourself designing screens with only a vague idea of how the things on the screen relate to the things elsewhere in the system? Do you leave stakeholder meetings with unclear directives that often seem to contradict previous conversations? You know a better understanding of user needs would help the team get clear on what you are actually trying to accomplish, but time and budget for research is tight. When it comes to asking for more direct contact with your users, you might feel like poor Oliver Twist, timidly asking, âPlease, sir, I want some more.â
Hereâs the trick. You need to get stakeholders themselves to identify high-risk assumptions and hidden complexity, so that they become just as motivated as you to get answers from users. Basically, you need to make them think itâs their idea.
In this article, Iâll show you how to collaboratively expose misalignment and gaps in the teamâs shared understanding by bringing the team together around two simple questions:
- What are the objects?
- What are the relationships between those objects?
These two questions align to the first two steps of the ORCA process, which might become your new best friend when it comes to reducing guesswork. Wait, whatâs ORCA?! Glad you asked.
ORCA stands for Objects, Relationships, CTAs, and Attributes, and it outlines a process for creating solid object-oriented user experiences. Object-oriented UX is my design philosophy. ORCA is an iterative methodology for synthesizing user research into an elegant structural foundation to support screen and interaction design. OOUX and ORCA have made my work as a UX designer more collaborative, effective, efficient, fun, strategic, and meaningful.
The ORCA process has four iterative rounds and a whopping fifteen steps. In each round we get more clarity on our Os, Rs, Cs, and As.
The four rounds and fifteen steps of the ORCA process. In the OOUX world, we love color-coding. Blue is reserved for objects! (Yellow is for core content, pink is for metadata, and green is for calls-to-action. Learn more about the color-coded object map and connecting CTAs to objects.)
I sometimes say that ORCA is a "garbage in, garbage out" process. To ensure that the testable prototype produced in the final round actually tests well, the process needs to be fed by good research. But if you don't have a ton of research, the beginning of the ORCA process serves another purpose: it helps you sell the need for research.
ORCA strengthens the weak spot between research and design by helping distill research into solid information architecture: scaffolding for the screen design and interaction design to hang on.
In other words, the ORCA process serves as a gauntlet between research and design. With good research, you can gracefully ride the killer whale from research into design. But without good research, the process effectively spits you back into research with a cache of specific open questions.
Getting in the same curiosity-boat
What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so.
Mark Twain
The first two steps of the ORCA process, Object Discovery and Relationship Discovery, shine a spotlight on the dark, dusty corners of your team's misalignments and any inherent complexity that's been swept under the rug. It begins to expose what this classic comic so beautifully illustrates:
The original "Tree Swing Project Management" cartoon dates back to the 1960s or 1970s and has no artist attribution we could find.
This is one reason why so many UX designers are frustrated in their job and why many projects fail. And this is also why we often can't sell research: every decision-maker is confident in their own mental picture.
Once we expose hidden fuzzy patches in each picture and the differences between them all, the case for user research makes itself.
But how we do this is important. However much we might want to, we canât just tell everyone, âYOU ARE WRONG!â Instead, we need to facilitate and guide our team members to self-identify holes in their picture. When stakeholders take ownership of assumptions and gaps in understanding, BAM! Suddenly, UX research is not such a hard sell, and everyone is aboard the same curiosity-boat.
Say your users are doctors. And you have no idea how doctors use the system you are tasked with redesigning.
You might try to sell research by honestly saying: âWe need to understand doctors better! What are their pain points? How do they use the current app?â But hereâs the problem with that. Those questions are vague, and the answers to them donât feel acutely actionable.
Instead, you want your stakeholders themselves to ask super-specific questions. This is more like the kind of conversation you need to facilitate. Letâs listen in:
âWait a sec, how often do doctors share patients? Does a patient in this system have primary and secondary doctors?â
âCan a patient even have more than one primary doctor?â
âIs it a âprimary doctorâ or just a âprimary caregiverâ⊠Canât that role be a nurse practitioner?â
âNo, caregivers are something else⊠Thatâs the patientâs family contacts, right?â
âSo are caregivers in scope for this redesign?â
âYeah, because if a caregiver is present at an appointment, the doctor needs to note that. Like, tag the caregiver on the note⊠Or on the appointment?â
Now we are getting somewhere. Do you see how powerful it can be getting stakeholders to debate these questions themselves? The diabolical goal here is to shake their confidenceâgently and diplomatically.
When these kinds of questions bubble up collaboratively and come directly from the mouths of your stakeholders and decision-makers, suddenly, designing screens without knowing the answers to these questions seems incredibly risky, even silly.
If we create software without understanding the real-world information environment of our users, we will likely create software that does not align to the real-world information environment of our users. And this will, hands down, result in a more confusing, more complex, and less intuitive software product.
The two questions
But how do we get to these kinds of meaty questions diplomatically, efficiently, collaboratively, and reliably?
We can do this by starting with those two big questions that align to the first two steps of the ORCA process:
- What are the objects?
- What are the relationships between those objects?
In practice, getting to these answers is easier said than done. Iâm going to show you how these two simple questions can provide the outline for an Object Definition Workshop. During this workshop, these âseedâ questions will blossom into dozens of specific questions and shine a spotlight on the need for more user research.
Prep work: Noun foraging
In the next section, I'll show you how to run an Object Definition Workshop with your stakeholders (and entire cross-functional team, hopefully). But first, you need to do some prep work.
Basically, look for nouns that are particular to the business or industry of your project, and do it across at least a few sources. I call this noun foraging.
Here are just a few great noun foraging sources:
- the productâs marketing site
- the productâs competitorsâ marketing sites (competitive analysis, anyone?)
- the existing product (look at labels!)
- user interview transcripts
- notes from stakeholder interviews or vision docs from stakeholders
Put your detective hat on, my dear Watson. Get resourceful and leverage what you have. If all you have is a marketing website, some screenshots of the existing legacy system, and access to customer service chat logs, then use those.
As you peruse these sources, watch for the nouns that are used over and over again, and start listing them (preferably on blue sticky notes if youâll be creating an object map later!).
Youâll want to focus on nouns that might represent objects in your system. If you are having trouble determining if a noun might be object-worthy, remember the acronym SIP and test for:
- Structure
- Instances
- Purpose
Think of a library app, for example. Is âbookâ an object?
Structure: can you think of a few attributes for this potential object? Title, author, publish date⊠Yep, it has structure. Check!
Instance: what are some examples of this potential âbookâ object? Can you name a few? The Alchemist, Ready Player One, Everybody Poops⊠OK, check!
Purpose: why is this object important to the users and business? Well, âbookâ is what our library client is providing to people and books are why people come to the library⊠Check, check, check!
SIP: Structure, Instances, and Purpose! (Here's a flowchart where I elaborate even more on SIP.)
As you are noun foraging, focus on capturing the nouns that have SIP. Avoid capturing components like dropdowns, checkboxes, and calendar pickers: your UX system is not your design system! Components are just the packaging for objects; they are a means to an end. No one is coming to your digital place to play with your dropdown! They are coming for the VALUABLE THINGS and what they can do with them. Those things, or objects, are what we are trying to identify.
Letâs say we work for a startup disrupting the email experience. This is how Iâd start my noun foraging.
First Iâd look at my own email client, which happens to be Gmail. Iâd then look at Outlook and the new HEY email. Iâd look at Yahoo, HotmailâŠIâd even look at Slack and Basecamp and other so-called âemail replacers.â Iâd read some articles, reviews, and forum threads where people are complaining about email. While doing all this, I would look for and write down the nouns.
(Before moving on, feel free to go noun foraging for this hypothetical product, too, and then scroll down to see how much our lists match up. Just donât get lost in your own emails! Come back to me!)
Drumroll, pleaseâŠ
Here are a few nouns I came up with during my noun foraging:
- email message
- thread
- contact
- client
- rule/automation
- email address that is not a contact?
- contact groups
- attachment
- Google doc file / other integrated file
- newsletter? (HEY treats this differently)
- saved responses and templates
Scan your list of nouns and pick out words that you are completely clueless about. In our email example, it might be client or automation. Do as much homework as you can before your session with stakeholders: google whatâs googleable. But other terms might be so specific to the product or domain that you need to have a conversation about them.
Aside: here are some real nouns foraged during my own past project work that I needed my stakeholders to help me understand:
- Record Locator
- Incentive Home
- Augmented Line Item
- Curriculum-Based Measurement Probe
This is really all you need to prepare for the workshop session: a list of nouns that represent potential objects and a short list of nouns that need to be defined further.
Facilitate an Object Definition Workshop
You could actually start your workshop with noun foraging; this activity can be done collaboratively. If you have five people in the room, pick five sources, assign one to every person, and give everyone ten minutes to find the objects within their source. When the time's up, come together and find the overlap. Affinity mapping is your friend here!
If your team is short on time and might be reluctant to do this kind of grunt work (which is usually the case) do your own noun foraging beforehand, but be prepared to show your work. I love presenting screenshots of documents and screens with all the nouns already highlighted. Bring the artifacts of your process, and start the workshop with a five-minute overview of your noun foraging journey.
HOT TIP: before jumping into the workshop, frame the conversation as a requirements-gathering session to help you better understand the scope and details of the system. You donât need to let them know that youâre looking for gaps in the teamâs understanding so that you can prove the need for more user researchâthat will be our little secret. Instead, go into the session optimistically, as if your knowledgeable stakeholders and PMs and biz folks already have all the answers.
Then, let the question whack-a-mole commence.
1. What is this thing?
Want to have some real fun? At the beginning of your session, ask stakeholders to privately write definitions for the handful of obscure nouns you might be uncertain about. Then, have everyone show their cards at the same time and see if you get different definitions (you will). This is gold for exposing misalignment and starting great conversations.
As your discussion unfolds, capture any agreed-upon definitions. And when uncertainty emerges, quietly (but visibly) start an "open questions" parking lot.
After definitions solidify, here's a great follow-up:
2. Do our users know what these things are? What do users call this thing?
Stakeholder 1: They probably call email clients "apps." But I'm not sure.
Stakeholder 2: Automations are often called "workflows," I think. Or, maybe users think workflows are something different.
If a more user-friendly term emerges, ask the group if they can agree to use only that term moving forward. This way, the team can better align to the usersâ language and mindset.
OK, moving on.
If you have two or more objects that seem to overlap in purpose, ask one of these questions:
3. Are these the same thing? Or are these different? If they are not the same, how are they different?
You: Is a saved response the same as a template?
Stakeholder 1: Yes! Definitely.
Stakeholder 2: I don't think so… A saved response is text with links and variables, but a template is more about the look and feel, like default fonts, colors, and placeholder images.
Continue to build out your growing glossary of objects. And continue to capture areas of uncertainty in your âopen questionsâ parking lot.
If you successfully determine that two similar things are, in fact, different, hereâs your next follow-up question:
4. What's the relationship between these objects?
You: Are saved responses and templates related in any way?
Stakeholder 3: Yeah, a template can be applied to a saved response.
You, always with the follow-ups: When is the template applied to a saved response? Does that happen when the user is constructing the saved response? Or when they apply the saved response to an email? How does that actually work?
Listen. Capture uncertainty. Once the list of "open questions" grows to a critical mass, pause to start assigning questions to groups or individuals. Some questions might be for the dev team (hopefully at least one developer is in the room with you). One question might be specifically for someone who couldn't make it to the workshop. And many questions will need to be labeled "user."
Do you see how we are building up to our UXR sales pitch?
5. Is this object in scope?
Your next question narrows the team's focus toward what's most important to your users. You can simply ask, "Are saved responses in scope for our first release?," but I've got a better, more devious strategy.
By now, you should have a list of clearly defined objects. Ask participants to sort these objects from most to least important, either in small breakout groups or individually. Then, like you did with the definitions, have everyone reveal their sort order at once. Surprisingly (or not so surprisingly), it's not unusual for the VP to rank something like "saved responses" as #2 while everyone else puts it at the bottom of the list. Try not to look too smug as you inevitably expose more misalignment.
I did this for a startup a few years ago. We posted the three groupsâ wildly different sort orders on the whiteboard.
Here's a snippet of the very messy middle from this session: three columns of object cards, showing the same cards prioritized completely differently by three different groups.
The CEO stood back, looked at it, and said, "This is why we haven't been able to move forward in two years."
Admittedly, itâs tragic to hear that, but as a professional, it feels pretty awesome to be the one who facilitated a watershed realization.
Once you have a good idea of in-scope, clearly defined things, this is when you move on to doing more relationship mapping.
6. Create a visual representation of the objects' relationships
We've already done a bit of this while trying to determine if two things are different, but this time, ask the team about every potential relationship. For each object, ask how it relates to all the other objects. In what ways are the objects connected? To visualize all the connections, pull out your trusty boxes-and-arrows technique. Here, we are connecting our objects with verbs. I like to keep my verbs to simple "has a" and "has many" statements.
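If it helps to see those verbs written down before drawing the boxes and arrows, here's a tiny sketch of the email example as plain data. The structure and names are hypothetical, just a note-taking format:

// Each object listed with its "has a" / "has many" relationships.
const systemModel = {
  emailMessage:  { hasA: ['thread'], hasMany: ['attachment'] },
  savedResponse: { hasA: ['template'], hasMany: ['attachment'] },
  template:      { hasA: [], hasMany: [] },
};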
A work-in-progress system model of our new email solution.
This system modeling activity brings up all sorts of new questions:
- Can a saved response have attachments?
- Can a saved response use a template? If so, if an email uses a saved response with a template, can the user override that template?
- Do users want to see all the emails they sent that included a particular attachment? For example, âshow me all the emails I sent with ProfessionalImage.jpg attached. Iâve changed my professional photo and I want to alert everyone to update it.â
Solid answers might emerge directly from the workshop participants. Great! Capture that new shared understanding. But when uncertainty surfaces, continue to add questions to your growing parking lot.
Light the fuse
You've positioned the explosives all along the floodgates. Now you simply have to light the fuse and BOOM. Watch the buy-in for user research flooooow.
Before your workshop wraps up, have the group reflect on the list of open questions. Make plans for getting answers internally, then focus on the questions that need to be brought before users.
Hereâs your final step. Take those questions youâve compiled for user research and discuss the level of risk associated with NOT answering them. Ask, âif we design without an answer to this question, if we make up our own answer and we are wrong, how bad might that turn out?â
With this methodology, we are cornering our decision-makers into advocating for user research as they themselves label questions as high-risk. Sorry, not sorry.
Now is your moment of truth. With everyone in the room, ask for a reasonable budget of time and money to conduct 6â8 user interviews focused specifically on these questions.
HOT TIP: if you are new to UX research, please note that youâll likely need to rephrase the questions that came up during the workshop before you present them to users. Make sure your questions are open-ended and donât lead the user into any default answers.
Final words: Hold the screen design!
Seriously, if at all possible, do not ever design screens again without first answering these fundamental questions: what are the objects and how do they relate?
I promise you this: if you can secure a shared understanding between the business, design, and development teams before you start designing screens, you will have less heartache and save more time and money, and (it almost feels like a bonus at this point!) users will be more receptive to what you put out into the world.
I sincerely hope this helps you win time and budget to go talk to your users and gain clarity on what you are designing before you start building screens. If you find success using noun foraging and the Object Definition Workshop, thereâs more where that came from in the rest of the ORCA process, which will help prevent even more late-in-the-game scope tugs-of-war and strategy pivots.
All the best of luck! Now go sell research!
A Content Model Is Not a Design System
Do you remember when having a great website was enough? Now, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.
But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content modelâa definition of content types, attributes, and relationships that let people and systems understand contentâwith my more familiar design-system thinking would capsize my customerâs omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content.
I recently had the opportunity to lead the CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot deliveryâdesigning content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces.
A content model is a critical foundation for an omnichannel content strategy, and for our content to be understood by multiple systems, the model needed semantic typesâtypes named according to their meaning instead of their presentation. Our goal was to let authors create content and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale that my customer needed required the whole team to recognize a new pattern.
Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy canât rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with our familiar design-system thinking constantly led us to veer away from one of the primary purposes of a content model: delivering content to audiences on multiple marketing channels.
Two essential principles for an effective content model
We needed to help our designers, developers, and stakeholders understand that we were doing something very different from their prior web projects, where it was natural for everyone to think about content as visual building blocks fitting into layouts. The previous approach was not only more familiar but also more intuitive (at least at first) because it made the designs feel more tangible. We discovered two principles that helped the team understand how a content model differs from the design systems that we were used to:
- Content models must define semantics instead of layout.
- And content models should connect content that belongs together.
A semantic content model uses type and attribute names that reflect the meaning of the content, not how it will be displayed. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. Although these types might make it easy to lay out content, they don't help delivery channels understand the content's meaning, and it's that understanding that opens the door to presenting the content in each marketing channel. In contrast, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit.
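To make the contrast concrete, here's a rough sketch; the type names, fields, and values are illustrative, not taken from the project. A nonsemantic type describes layout:

{
  "type": "card",
  "heading": "Great service",
  "body": "The support team resolved my issue in minutes.",
  "image": "quote-photo.jpg"
}

A semantic type names what the content actually is, so any channel can decide how to present it:

{
  "type": "testimonial",
  "quote": "The support team resolved my issue in minutes.",
  "authorName": "Ada Example",
  "rating": 5
}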
When youâre creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are intelligible to platforms like Google search.
A semantic content model has several benefits:
- Even if your team doesnât care about omnichannel content, a semantic content model decouples content from its presentation so that teams can evolve the websiteâs design without needing to refactor its content. In this way, content can withstand disruptive website redesigns.
- A semantic content model also provides a competitive edge. By adding structured data based on Schema.orgâs types and properties, a website can provide hints to help Google understand the content, display it in search snippets or knowledge panels, and use it to answer voice-interface user questions. Potential visitors could discover your content without ever setting foot in your website.
- Beyond those practical benefits, youâll also need a semantic content model if you want to deliver omnichannel content. To use the same content in multiple marketing channels, delivery channels need to be able to understand it. For example, if your content model were to provide a list of questions and answers, it could easily be rendered on a frequently asked questions (FAQ) page, but it could also be used in a voice interface or by a bot that answers common questions.
For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.
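As a concrete illustration, Schema.org-style structured data for the FAQ example mentioned above might look roughly like this; the question and answer text are placeholders:

{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Can the same content be reused across channels?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes, if the content model is semantic and keeps related content together."
      }
    }
  ]
}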
Content models that connect
After struggling to describe what makes a good content model, I've come to realize that the best models are those that are semantic and that also connect related content components (such as a FAQ item's question and answer pair), instead of slicing up related content across disparate content components. A good content model connects content that should remain together so that multiple delivery channels can use it without needing to first put those pieces back together.
Think about writing an article or essay. An articleâs meaning and usefulness depends upon its parts being kept together. Would one of the headings or paragraphs be meaningful on their own without the context of the full article? On our project, our familiar design-system thinking often led us to want to create content models that would slice content into disparate chunks to fit the web-centric layout. This had a similar impact to an article that were to have been separated from its headline. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.
To illustrate, letâs look at how connecting related content applies in a real-world scenario. The design team for our customer presented a complex layout for a software product page that included multiple tabs and sections. Our instincts were to follow suit with the content model. Shouldnât we make it as easy and as flexible as possible to add any number of tabs in the future?
Because our design-system instincts were so familiar, it felt like we needed a content type called "tab section" so that multiple tab sections could be added to a page. Each tab section would display various types of content. One tab might provide the software's overview or its specifications. Another tab might provide a list of resources.
Our inclination to break down the content model into "tab section" pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have created content that other delivery channels couldn't understand. For example, how would another system tell which "tab section" referred to a product's specifications or its resource list? Would it have to resort to counting tab sections and content blocks? This would have prevented the tabs from ever being reordered, and it would have required adding logic to every other delivery channel to interpret the design system's layout. Furthermore, if the customer no longer wanted to display this content in a tab layout, it would have been tedious to migrate to a new content model to reflect the page redesign.
A content model based on design components is unnecessarily complex, and it's unintelligible to systems.
We had a breakthrough when we discovered that our customer had a specific purpose in mind for each tab: it would reveal specific information such as the software product's overview, specifications, related resources, and pricing. Once implementation began, our inclination to focus on what's visual and familiar had obscured the intent of the designs. With a little digging, it didn't take long to realize that the concept of tabs wasn't relevant to the content model. The meaning of the content that they were planning to display in the tabs was what mattered.
In fact, the customer could have decided to display this content in a different wayâwithout tabsâsomewhere else. This realization prompted us to define content types for the software product based on the meaningful attributes that the customer had wanted to render on the web. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The softwareâs product information stayed together because it wasnât sliced across separate components like âtab sectionsâ that were derived from the contentâs presentation. Any delivery channelâincluding future onesâcould understand and present this content.
A good content model connects content that belongs together so it can be easily managed and reused.
Conclusion
In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept content together that belonged together (instead of fragmenting it). These two concepts curtailed our temptation to shape the content model based on the design. So if you're working on a content model to support an omnichannel content strategy, or even if you just want to make sure that Google and other interfaces understand your content, remember:
- A design system isnât a content model. Team members may be tempted to conflate them and to make your content model mirror your design system, so you should protect the semantic value and contextual structure of the content strategy during the entire implementation process. This will let every delivery channel consume the content without needing a magic decoder ring.
- If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.orgâbased structured data in your website. Even if additional delivery channels arenât on the immediate horizon, the benefit to search engine optimization is a compelling reason on its own.
- Additionally, remind the team that decoupling the content model from the design will let them update the designs more easily because they won't be held back by the cost of content migrations. They'll be able to create new designs without the obstacle of compatibility between the design and the content, and they'll be ready for the next big thing.
By rigorously advocating for these principles, youâll help your team treat content the way that it deservesâas the most critical asset in your user experience and the best way to connect with your audience.
Design for Safety, An Excerpt
Antiracist economist Kim Crayton says that âintention without strategy is chaos.â Weâve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical techâbut what, specifically, do we need to do to fix it? The intention to make our tech safer is not enough; we need a strategy.
This chapter will equip you with that plan of action. It covers how to integrate safety principles into your design work in order to create tech thatâs safe, how to convince your stakeholders that this work is necessary, and how to respond to the critique that what we actually need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to fixing unethical, unsafe tech.)
The process for inclusive safety
When you are designing for safety, your goals are to:
- identify ways your product can be used for abuse,
- design ways to prevent the abuse, and
- provide support for vulnerable users to reclaim power and control.
The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). Itâs a methodology I created in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action:
- Conducting research
- Creating archetypes
- Brainstorming problems
- Designing solutions
- Testing for safety
The Process is meant to be flexible; in some situations it won't make sense for teams to implement every step. Use the parts that are relevant to your unique work and context; this is meant to be something you can insert into your existing design practice.
And once you use it, if you have an idea for making it better or simply want to provide context of how it helped your team, please get in touch with me. Itâs a living document that I hope will continue to be a useful and realistic tool that technologists can use in their day-to-day work.
If you're working on a product specifically for a vulnerable group or survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or drug addiction, be sure to read Chapter 7, which covers that situation explicitly; it should be handled a bit differently. The guidelines here are for prioritizing safety when designing a more general product that will have a wide user base (which, we already know from statistics, will include certain groups that should be protected from harm). Chapter 7 is focused on products that are specifically for vulnerable groups and people who have experienced trauma.
Step 1: Conduct research
Design research should include a broad analysis of how your tech might be weaponized for abuse as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment.
Broad research
Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If your product will involve AI, seek to understand the potentials for racism and other issues that have been reported in existing AI products. Nearly all types of technology have some kind of potential or actual harm that's been reported on in the news or written about by academics. Google Scholar is a useful tool for finding these studies.
Specific research: Survivors
When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Ideally, you'll want to interview advocates working in the space of your research first so that you have a more solid understanding of the topic and are better equipped to not retraumatize survivors. If you've uncovered possible domestic violence issues, for example, the experts you'll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers.
Especially when interviewing survivors of any kind of trauma, it is important to pay people for their knowledge and lived experiences. Donât ask survivors to share their trauma for free, as this is exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. Weâll talk more about how to appropriately interview survivors in Chapter 6.
Specific research: Abusers
It's unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don't make this a goal; rather, try to get at this angle in your general research. Aim to understand how abusers or bad actors weaponize technology to use against others, how they cover their tracks, and how they explain or rationalize the abuse.
Step 2: Create archetypes
Once you've finished conducting your research, use your insights to create abuser and survivor archetypes. Archetypes are not personas, as they're not based on real people that you interviewed and surveyed. Instead, they're based on your research into likely safety issues, much like when we design for accessibility: we don't need to have found a group of blind or low-vision users in our interview pool to create a design that's inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and can be more generalized.
The abuser archetype is someone who will look at the product as a tool to perform harm (Fig 5.2). They may be trying to harm someone they donât know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally.
Fig 5.2: Harry Oleson, an abuser archetype for a fitness product, is looking for ways to stalk his ex-girlfriend through the fitness apps she uses.
The survivor archetype is someone who is being abused with the product. There are various situations to consider in terms of the archetype's understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they've been targeted in the first place and need to be alerted (Fig 5.3)?
Fig 5.3: The survivor archetype Lisa Zwaan suspects her husband is weaponizing their home's IoT devices against her, but in the face of his insistence that she simply doesn't understand how to use the products, she's unsure. She needs some kind of proof of the abuse.
You may want to make multiple survivor archetypes to capture a range of different experiences. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices; or they know it's happening but don't know how, such as when a stalker keeps figuring out their location (Fig 5.4). Include as many of these scenarios as you need to in your survivor archetype. You'll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.
Fig 5.4: The survivor archetype Eric Mitchell knows he's being stalked by his ex-boyfriend Rob but can't figure out how Rob is learning his location information.
It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Instead of focusing on the demographic information we often see in personas, focus on their goals. The goals of the abuser will be to carry out the specific abuse you've identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that's being used for abuse. Later, you'll brainstorm how to prevent the abuser's goals and assist the survivor's goals.
And while the "abuser/survivor" model fits most cases, it doesn't fit all, so modify it as you need to. For example, if you uncovered an issue with security, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would get the abuser archetype and the child's parents would get the survivor archetype.
Step 3: Brainstorm problems
After creating archetypes, brainstorm novel abuse cases and safety issues. "Novel" means things not found in your research; you're trying to identify completely new safety issues that are unique to your product or service. The goal of this step is to exhaust every effort to identify harms your product could cause. You aren't worrying about how to prevent the harm yet; that comes in the next step.
How could your product be used for any kind of abuse, outside of what youâve already identified in your research? I recommend setting aside at least a few hours with your team for this process.
If youâre looking for somewhere to start, try doing a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the showâthe most wild, awful, out-of-control ways it could be used for harm. When Iâve led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is greatâitâs okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, and then dialing it back and using the rest of the time thinking of more realistic forms of harm.
After youâve identified as many opportunities for abuse as possible, you may still not feel confident that youâve uncovered every potential form of harm. A healthy amount of anxiety is normal when youâre doing this kind of work. Itâs common for teams designing for safety to worry, âHave we really identified every possible harm? What if weâve missed something?â If youâve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step.
Itâs impossible to guarantee youâve thought of everything; instead of aiming for 100 percent assurance, recognize that youâve taken this time and have done the best you can, and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.
Step 4: Design solutions
At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. The next step is to identify ways to design against the identified abuser's goals and to support the survivor's goals. This step is a good one to insert alongside existing parts of your design process where you're proposing solutions for the various problems your research uncovered.
Some questions to ask yourself to help prevent harm and support your archetypes include:
- Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what roadblocks can you put up to prevent the harm from happening?
- How can you make the victim aware that abuse is happening through your product?
- How can you help the victim understand what they need to do to make the problem stop?
- Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product help the user access support?
In some products, itâs possible to proactively recognize that harm is happening. For example, a pregnancy app might be modified to allow the user to report that they were the victim of an assault, which could trigger an offer to receive resources for local and national organizations. This sort of proactiveness is not always possible, but itâs worth taking a half hour to discuss if any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.
That said, use caution: you donât want to do anything that could put a user in harmâs way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. Weâll walk through a good example of this in the next chapter.
Step 5: Test for safety
The final step is to test your prototypes from the point of view of your archetypes: the person who wants to weaponize the product for harm and the victim of the harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you'll aim to rigorously test out your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.
Ideally, safety testing happens along with usability testing. If youâre at a company that doesnât do usability testing, you might be able to use safety testing to cleverly perform both; a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that donât make sense to them.
You'll want to conduct safety testing on either your final prototype or the actual product if it's already been released. There's nothing wrong with testing an existing product that wasn't designed with safety goals in mind from the outset; "retrofitting" it for safety is a good thing to do.
Remember that testing for safety involves testing from the perspective of both an abuser and a survivor, though it may not make sense for you to do both. Alternatively, if you made multiple survivor archetypes to capture multiple scenarios, youâll want to test from the perspective of each one.
As with other sorts of usability testing, you as the designer are most likely too close to the product and its design by this point to be a valuable tester; you know the product too well. Instead of doing it yourself, set up testing as you would with other usability testing: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.
Abuser testing
The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike with usability testing, you want to make it impossible, or at least difficult, for them to achieve their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.
For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this goal in mind, youâd try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers.
If by the end of this youâve managed to uncover some of her location data, despite her having set her profile to private, you know now that your product enables stalking. Your next step is to go back to step 4 and figure out how to prevent this from happening. You may need to repeat the process of designing solutions and testing them more than once.
Survivor testing
Survivor testing involves identifying how to give information and power to the survivor. It might not always make sense based on the product or context: thwarting the attempt of an abuser archetype to stalk someone also satisfies the goal of the survivor archetype to not be stalked, so separate testing from the survivor's perspective wouldn't be needed.
However, there are cases where it makes sense. For example, for a smart thermostat, a survivor archetypeâs goals would be to understand who or what is making the temperature change when they arenât doing it themselves. You could test this by looking for the thermostatâs history log and checking for usernames, actions, and times; if you couldnât find that information, you would have more work to do in step 4.
Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. Your test would involve attempting to figure out how to do this: are there instructions that explain how to remove another user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account.
Stress testing
To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors pointed out that personas typically center people who are having a good day, but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are called "stress cases," and testing your products for users in stress-case situations can help you identify places where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design as well as many other great tactics for compassionate design.
Sustainable Web Design, An Excerpt
In the 1950s, many in the elite running community had begun to believe it wasnât possible to run a mile in less than four minutes. Runners had been attempting it since the late 19th century and were beginning to draw the conclusion that the human body simply wasnât built for the task.
But on May 6, 1954, Roger Bannister took everyone by surprise. It was a cold, wet day in Oxford, England, with conditions no one expected to lend themselves to record-setting, and yet Bannister did just that, running a mile in 3:59.4 and becoming the first person in the record books to run a mile in under four minutes.
This shift in the benchmark had profound effects; the world now knew that the four-minute mile was possible. Bannister's record lasted only forty-six days before it was snatched away by Australian runner John Landy. Then a year later, three runners all beat the four-minute barrier together in the same race. Since then, over 1,400 runners have officially run a mile in under four minutes; the current record is 3:43.13, held by Moroccan athlete Hicham El Guerrouj.
We achieve far more when we believe that something is possible, and we will believe itâs possible only when we see someone else has already done itâand as with human running speed, so it is with what we believe are the hard limits for how a website needs to perform.
Establishing standards for a sustainable web
In most major industries, the key metrics of environmental performance are fairly well established, such as miles per gallon for cars or energy per square meter for homes. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren't held to any particular environmental standards, and we have only recently gained the tools and methods we need to even make an environmental assessment.
The primary goal in sustainable web design is to reduce carbon emissions. However, itâs almost impossible to actually measure the amount of CO2 produced by a web product. We canât measure the fumes coming out of the exhaust pipes on our laptops. The emissions of our websites are far away, out of sight and out of mind, coming out of power stations burning coal and gas. We have no way to trace the electrons from a website or app back to the power station where the electricity is being generated and actually know the exact amount of greenhouse gas produced. So what do we do?
If we canât measure the actual carbon emissions, then we need to find what we can measure. The primary factors that could be used as indicators of carbon emissions are:
- Data transfer
- Carbon intensity of electricity
Letâs take a look at how we can use these metrics to quantify the energy consumption, and in turn the carbon footprint, of the websites and web apps we create.
Data transfer
Most researchers use kilowatt-hours per gigabyte (kWh/GB) as a metric of energy efficiency, based on the amount of data transferred over the internet when a website or application is used. This provides a great reference point for energy consumption and carbon emissions. As a rule of thumb, the more data transferred, the more energy used in the data center, telecoms networks, and end user devices.
For web pages, data transfer for a single visit can be most easily estimated by measuring the page weight, meaning the transfer size of the page in kilobytes the first time someone visits the page. Itâs fairly easy to measure using the developer tools in any modern web browser. Often your web hosting account will include statistics for the total data transfer of any web application (Fig 2.1).
Fig 2.1: The Kinsta hosting dashboard displays data transfer alongside traffic volumes. If you divide data transfer by visits, you get the average data per visit, which can be used as a metric of efficiency.
The nice thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes.
There is plenty of scope for reducing page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as "desktop" and 1.77 MB for "mobile," with desktop weights increasing 36 percent since January 2016 and mobile page weights nearly doubling in the same period (Fig 2.2). Roughly half of this data transfer is image files, making images the single biggest source of carbon emissions on the average website.
History clearly shows us that our web pages can be smaller, if only we set our minds to it. While most technologies become ever more energy efficient, including the underlying technology of the web such as data centers and transmission networks, websites themselves are a technology that becomes less efficient as time goes on.
Fig 2.2: The historical page weight data from HTTP Archive can teach us a lot about what is possible in the future.
You might be familiar with the concept of performance budgeting as a way of focusing a project team on creating faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Much like speed limits while driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in under budget.
Designing for fast performance does often lead to reduced data transfer and emissions, but it isnât always the case. Web performance is often more about the subjective perception of load times than it is about the true efficiency of the underlying system, whereas page weight and transfer size are more objective measures and more reliable benchmarks for sustainable web design.
We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark page weight against competitors or the old version of the website weâre replacing. For example, we might set a maximum page weight budget as equal to our most efficient competitor, or we could set the benchmark lower to guarantee we are best in class.
If we want to take it to the next level, then we could also start looking at the transfer size of our web pages for repeat visitors. Although page weight for the first time someone visits is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more if we start looking at transfer size in other scenarios too. For example, visitors who load the same page multiple times will likely have a high percentage of the files cached in their browser, meaning they donât need to transfer all of the files on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached in their browser. Measuring transfer size at this next level of detail can help us learn even more about how we can optimize efficiency for users who regularly visit our pages, and enable us to set page weight budgets for additional scenarios beyond the first visit.
Page weight budgets are easy to track throughout a design and development process. Although they donât actually tell us carbon emission and energy consumption analytics directly, they give us a clear indication of efficiency relative to other websites. And as transfer size is an effective analog for energy consumption, we can actually use it to estimate energy consumption too.
In summary, reduced data transfer translates to energy efficiency, a key factor to reducing carbon emissions of web products. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. But as weâll see next, since all web products demand some power, itâs important to consider the source of that electricity, too.
Carbon intensity of electricity
Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. Carbon intensity is a term used to define the grams of CO2 produced for every kilowatt-hour of electricity (gCO2/kWh). This varies widely: renewable energy sources and nuclear have an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction), whereas fossil fuels have a very high carbon intensity of approximately 200-400 gCO2/kWh.
Most electricity comes from national or state grids, where energy from a variety of different sources is mixed together with varying levels of carbon intensity. The distributed nature of the internet means that a single user of a website or app might be using energy from multiple different grids simultaneously; a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the websiteâs data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.
We donât have control over the full energy supply of web services, but we do have some control over where we host our projects. With a data center using a significant proportion of the energy of any website, locating the data center in an area with low carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this user-contributed data, and a glance at their map shows how, for example, choosing a data center in France will have significantly lower carbon emissions than a data center in the Netherlands (Fig 2.3).
Fig 2.3: Tomorrow's electricityMap shows live data for the carbon intensity of electricity by country.
That said, we don't want to locate our servers too far away from our users; it takes energy to transmit data through the telecoms networks, and the further the data travels, the more energy is consumed. Just like food miles, we can think of the distance from the data center to the website's core user base as "megabyte miles," and we want it to be as small as possible.
Using the distance itself as a benchmark, we can use website analytics to identify the country, state, or even city where our core user group is located and measure the distance from that location to the data center used by our hosting company. This will be a somewhat fuzzy metric as we donât know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea.
For example, if a website is hosted in London but the primary user base is on the West Coast of the USA, then we could look up the distance from London to San Francisco, which is 5,300 miles. Thatâs a long way! We can see that hosting it somewhere in North America, ideally on the West Coast, would significantly reduce the distance and thus the energy used to transmit the data. In addition, locating our servers closer to our visitors helps reduce latency and delivers better user experience, so itâs a win-win.
Converting it back to carbon emissions
If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created does this by measuring the data transfer over the wire when loading a web page, calculating the amount of electricity associated, and then converting that into a figure for CO2 (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.
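To make the conversion concrete, here is a rough, back-of-the-envelope example using assumed round figures rather than numbers from this book: a 2 MB page is 0.002 GB of data transfer; at an assumed energy intensity of 0.8 kWh/GB, each page view uses about 0.0016 kWh; on a grid with a carbon intensity of 400 gCO2/kWh, that works out to roughly 0.64 g of CO2 per view, or about 64 kg of CO2 for 100,000 views a month. Substituting your own page weight, energy intensity, and grid intensity gives a first-order estimate for your site.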
If you want to take it to the next level and tailor the data more accurately to the unique aspects of your project, the Energy and Emissions Worksheet accompanying this book shows you how.
Fig 2.4: The Website Carbon Calculator shows how the Riverford Organic website embodies their commitment to sustainability, being both low carbon and hosted in a data center using renewable energy.
With the ability to calculate carbon emissions for our projects, we could actually take a page weight budget one step further and set carbon budgets as well. CO2 is not a metric commonly used in web projects; we're more familiar with kilobytes and megabytes, and can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn't as intuitive, but carbon budgets do focus our minds on the primary thing we're trying to reduce, and support the core objective of sustainable web design: reducing carbon emissions.
Browser Energy
Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but by giving us one number to represent the energy used in the data center, the telecoms networks, and the end user's devices, it can't offer us insights into the efficiency of any specific part of the system.
One part of the system we can look at in more detail is the energy used by end usersâ devices. As front-end web technologies become more advanced, the computational load is increasingly moving from the data center to usersâ devices, whether they be phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript. Furthermore, JavaScript libraries such as Angular and React allow us to create applications where the âthinkingâ work is done partly or entirely in the browser.
All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, more computation in the userâs web browser means more energy used by their devices. This has implications not just environmentally, but also for user experience and inclusivity. Applications that put a heavy processing load on the userâs device can inadvertently exclude users with older, slower devices and cause batteries on phones and laptops to drain faster. Furthermore, if we build web applications that require the user to have up-to-date, powerful devices, people throw away old devices much more frequently. This isnât just bad for the environment, but it puts a disproportionate financial burden on the poorest in society.
Partly because the tools are limited, and partly because there are so many different models of devices, it's difficult to measure website energy consumption on end users' devices. One tool we do currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).
Fig 2.5: The Energy Impact meter in Safari (on the right) shows how a website consumes CPU energy.
You know when you load a website and your computer's cooling fans start spinning so frantically you think it might actually take off? That's essentially what this tool is measuring.
It shows us the percentage of CPU used and the duration of CPU usage when loading the web page, and uses these figures to generate an energy impact rating. It doesnât give us precise data for the amount of electricity used in kilowatts, but the information it does provide can be used to benchmark how efficiently your websites use energy and set targets for improvement.
Voice Content and Usability
Weâve been having conversations for thousands of years. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only in the last few millennia have we begun to commit our conversations to writing, and only in the last few decades have we begun to outsource them to the computer, a machine that shows much more affinity for written correspondence than for the slangy vagaries of spoken language.
Computers have trouble because, between spoken and written language, speech is the more primordial. To have successful conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.
In contrast, written language immediately concretizes as we commit it to record and retains usages long after they become obsolete in spoken communication (the salutation âTo whom it may concern,â for example), generating its own fossil record of outdated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.
Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfacesâthe machines we conduct spoken conversations withâwe face exciting challenges as designers and content strategists.
Voice Interactions
We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too (http://bkaprt.com/vcu36/01-01). Generally, we start up a conversation because:
- we need something done (such as a transaction),
- we want to know something (information of some sort), or
- we are social beings and want someone to talk to (conversation for conversationâs sake).
These three categoriesâwhich I call transactional, informational, and prosocialâalso characterize essentially every voice interaction: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interfaceâs first greeting and ending with the user exiting the interface. Note here that a conversation in our human senseâa chat between people that leads to some result and lasts an arbitrary length of timeâcould encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.
Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines donât yet have the capacity to really want to know how weâre doing and to do the sort of glad-handing humans crave. Thereâs also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to usersâ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be humanâpotentially alienating them in the process (http://bkaprt.com/vcu36/01-01).
That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome (âbuy iced teaâ) and an informational voice interaction teaching us something new (âdiscuss a musicalâ).
Transactional voice interactions
Unless you're tapping buttons on a food delivery app, you're generally having a conversation, and therefore a voice interaction, when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).
Alison: Hey, howâs it going?
Burhan: Hi, welcome to Crust Deluxe! Itâs cold out there. How can I help you?
Alison: Can I get a Hawaiian pizza with extra pineapple?
Burhan: Sure, what size?
Alison: Large.
Burhan: Anything else?
Alison: No thanks, thatâs it.
Burhan: Something to drink?
Alison: Iâll have a bottle of Coke.
Burhan: You got it. Thatâll be $13.55 and about fifteen minutes.
Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: theyâre direct, to the point, and economical. They quickly dispense with pleasantries.
Informational voice interactions
Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we're after much more.
Alison: Hey, howâs it going?
Burhan: Hi, welcome to Crust Deluxe! Itâs cold out there. How can I help you?
Alison: Can I ask a few questions?
Burhan: Of course! Go right ahead.
Alison: Do you have any halal options on the menu?
Burhan: Absolutely! We can make any pie halal by request. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you thinking about any other dietary restrictions?
Alison: What about gluten-free pizzas?
Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you?
Alison: Thatâs it for now. Good to know. Thanks!
Burhan: Anytime, come back soon!
This is a very different dialogue. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truthâresearch expeditions to gather data, news, or facts. Voice interactions that are informational might be more long-winded than transactional conversations by necessity. Responses tend to be lengthier, more informative, and carefully communicated so the customer understands the key takeaways.
Voice Interfaces
At their core, voice interfaces employ speech to support users in reaching their goals. But simply because an interface has a voice component doesn't mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we're most concerned in this book with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component whatsoever, and are therefore much more nuanced and challenging to tackle.
Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.
Interactive voice response (IVR) systems
Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.
IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to guide customers to a real phone agent (âSay Reservations to book a flight or check an itineraryâ); chances are you will enter a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and usersâ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries (http://bkaprt.com/vcu36/01-02, PDF).
While IVR systems are great for highly repetitive, monotonous conversations that generally donât veer from a single format, they have a reputation for less scintillating conversation than weâre used to in real life (or even in science fiction).
Screen readers
Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it's the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.
Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986 (http://bkaprt.com/vcu36/01-03). That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs) (http://bkaprt.com/vcu36/01-04).
With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web âprovide mechanisms that translate visual design constructsâproximity, proportion, etc.âinto useful information,â writes Aaron Gustafson in A List Apart. âAt least they do when documents are authored thoughtfullyâ (http://bkaprt.com/vcu36/01-05).
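As a small, hypothetical illustration (not taken from the article) of the kind of markup that gives screen readers something meaningful to announce, compare a generic container with a semantic landmark:

<!-- A generic container tells a screen reader nothing about its purpose -->
<div class="nav">...</div>

<!-- A semantic element with ARIA attributes announces itself as navigation -->
<nav aria-label="Primary">
  <ul>
    <li><a href="/articles" aria-current="page">Articles</a></li>
    <li><a href="/about">About</a></li>
  </ul>
</nav>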
Though deeply instructive for voice interface designers, thereâs one significant problem with screen readers: theyâre difficult to use and unremittingly verbose. The visual structures of websites and web navigation donât translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.
In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:
From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and then, and only then, translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users. (http://bkaprt.com/vcu36/01-06)
In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, visual interface users have the benefit of darting around the viewport freely to find information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to employ clunky screen readers may find that voice interfaces, particularly more modern voice assistants, offer a more streamlined experience.
Voice assistants
When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett's voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they're rapidly gaining more attention from accessibility advocates for their assistive potential.
Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others formulated their vision for a Semantic Web âagentâ that would perform typical errands like âchecking calendars, making appointments, and finding locationsâ (http://bkaprt.com/vcu36/01-07, behind paywall). It wasnât until 2011 that Appleâs Siri finally entered the picture, making voice assistants a tangible reality for consumers.
Thanks to the plethora of voice assistants available today, there is considerable variation in how programmable and customizable certain voice assistants are over others (Fig 1.1). At one extreme, everything except vendor-provided features is locked down; for example, at the time of their release, the core functionality of Appleâs Siri and Microsoftâs Cortana couldnât be extended beyond their existing capabilities. Even today, it isnât possible to program Siri to perform arbitrary functions, because thereâs no means by which developers can interact with Siri at a low level, apart from predefined categories of tasks like sending messages, hailing rideshares, making restaurant reservations, and certain others.
At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, programmable voice assistants that lend themselves to customization and extensibility are becoming increasingly popular for developers who feel stifled by the limitations of Siri and Cortana. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.
Fig 1.1: Voice assistants like Amazon Alexa and Google Home tend to be more programmable, and thus more flexible, than their counterpart Apple Siri.
As corporations like Amazon, Apple, Microsoft, and Google continue to stake their territory, they're also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.
Often by necessity, voice assistants like Amazon Alexa tend to be monochannelâtheyâre tightly coupled to a device and canât be accessed on a computer or smartphone instead. By contrast, many development platforms like Googleâs Dialogflow have introduced omnichannel capabilities so users can build a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I donât prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 weâll get into some of the implications these variables might have on the way you build out your design artifacts.
Voice Content
Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise: everything written content isn't.
Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, weâre most concerned with content delivered auditorilyânot as an option, but as a necessity.
For many of us, our first foray into informational voice interfaces will be to deliver content to users. Thereâs only one problem: any content we already have isnât in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions?
Lately, weâve begun slicing and dicing our content in unprecedented ways. Websites are, in many respects, colossal vaults of what I call macrocontent: lengthy prose that can extend for infinitely scrollable miles in a browser window, like microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:
A dayâs weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent. (http://bkaprt.com/vcu36/01-08)
Iâd update Dashâs definition of microcontent to include all examples of bite-sized content that go well beyond written communiquĂ©s. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best opportunity to gauge how your content can be stretched to the very edges of its capabilities, informing delivery channels both established and novel.
As microcontent, voice content is unique because itâs an example of how content is experienced in time rather than in space. We can glance at a digital sign underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for periods of time that we canât easily escape or skip, something screen reader users are all too familiar with.
Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where theyâll eventually end up, we need to ensure that our microcontent truly performs well as voice contentâand that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.
Fundamentally, the legibility and discoverability of our voice content both have to do with how voice content manifests in perceived time and space.
Designing for the Unexpected
I'm not sure when I first heard this quote, but it's something that has stayed with me over the years. How do you create services for situations you can't imagine? Or design products that work on devices yet to be invented?
Flash, Photoshop, and responsive design
When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content in. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.
Ethan Marcotte's talk at An Event Apart and subsequent article "Responsive Web Design" in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.
The fear wasn't helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can't just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.
A new way to design
Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:
.column-span-6 {
width: 49%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
.column-span-4 {
width: 32%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
.column-span-3 {
width: 24%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
Then with Sass, so I could take advantage of @include to reuse repeated blocks of code and move back to more semantic markup:
.logo {
@include colSpan(6);
}
.search {
@include colSpan(3);
}
.social-share {
@include colSpan(3);
}
Media queries
The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable (the exact opposite problem occurred with the introduction of a mobile-first approach).
Components becoming too small at mobile breakpoints
Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on.
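As a minimal sketch of that breakpoint-driven approach, reusing the grid classes from earlier (the pixel value is illustrative, not taken from any particular framework):
.column-span-6 {
  width: 49%;
  float: left;
}
/* Below an arbitrary "mobile" breakpoint, stack the column full width */
@media (max-width: 600px) {
  .column-span-6 {
    width: 100%;
    float: none;
  }
}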
For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup, something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge.
Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.
<section class="row">
<div class="column-span-4">1 of 7</div>
<div class="column-span-4">2 of 7</div>
<div class="column-span-4">3 of 7</div>
</section>
<section class="row">
<div class="column-span-4">4 of 7</div>
<div class="column-span-4">5 of 7</div>
<div class="column-span-4">6 of 7</div>
</section>
<section class="row">
<div class="column-span-4">7 of 7</div>
</section>
Components placed in the rows of a Sass grid
Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components.
Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, then this is a real problem because you can only use these components if the devices you're designing for correspond to the viewport sizes used in the pattern library, in the process not really hitting that "devices that don't yet exist" goal.
Then there's the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?
Components responding to the viewport width with media queries
Container queries: our savior or a false dawn?
Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.
Components responding to their parent container with container queries
One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.
In other words, responsive components to replace responsive layouts.
Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
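As a rough sketch of the idea (the syntax was still settling at the time of writing, and the selectors here are purely illustrative), a container query might look something like this:
.sidebar {
  container-type: inline-size; /* the sidebar becomes a queryable container */
}
/* The card responds to the width of its container, not the viewport */
@container (min-width: 400px) {
  .card {
    display: flex;
    gap: 10px;
  }
}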
My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component?
A component library removed from context and real content is probably not the best place for that decision.
As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?
Cards responding to their parent container with container queries
Cards responding based on their own content
In this example, the dimensions of the container are not what should dictate the design; rather, the image is.
It's hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.
CSS is changing
Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.
.wrapper {
display: grid;
grid-template-columns: repeat(auto-fit, 450px);
gap: 10px;
}
The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and "flex" to fill available space.
.wrapper {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
.child {
flex-basis: 32%;
margin-bottom: 20px;
}
The biggest benefit of all this is you don't need to wrap elements in container rows. Without rows, content isn't tied to page markup in quite the same way, allowing for removals or additions of content without additional development.
A traditional Grid layout without the usual row containers
This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid.
Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they're given CMS access, like the illustration below?
Cards unable to respond to a sibling's content changes
Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.
Cards responding to content in sibling cards
.wrapper {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
grid-template-rows: auto 1fr auto;
gap: 10px;
}
.sub-grid {
display: grid;
grid-row: span 3;
grid-template-rows: subgrid; /* sets rows to parent grid */
}
CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Meanwhile, Subgrid allows us to create designs that can adapt in order to suit morphing content. Subgrid at the time of writing is only supported in Firefox but the above code can be implemented behind an @supports feature query.
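One way to do that is a feature query along these lines, which applies the subgrid rows only where they're supported and leaves other browsers with the fallback layout:
.sub-grid {
  display: grid;
  grid-row: span 3;
}
@supports (grid-template-rows: subgrid) {
  .sub-grid {
    grid-template-rows: subgrid; /* only applied in browsers that support subgrid */
  }
}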
Intrinsic layouts
I'd be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space.
Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won't ever shrink so much that they render the content illegible.
fr units is a way to say I want you to distribute the extra space in this way, but...don't ever make it smaller than the content that's inside of it.
Jen Simmons, "Designing Intrinsic Layouts"
Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.
Slide from "Designing Intrinsic Layouts" by Jen Simmons
What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation.
We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.
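As a small sketch of what mixing fixed, flexible, and content-based units can look like (the class name and track sizes here are illustrative):
.intrinsic-layout {
  display: grid;
  /* a fixed rail, a column sized by its content, and a flexible main area */
  grid-template-columns: 200px min-content minmax(30ch, 1fr);
  gap: 20px;
}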
Another 2010 moment?
This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it's another "everything changed" moment.
But it doesn't seem to be moving quite as fast; I haven't yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention.
One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase.
Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn't exactly all-new; it's about using existing skills and existing CSS knowledge in a different way.
You can't framework your way out of a content problem
Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change.
Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.
Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content.
And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.
How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I'm a big fan of.
The debate about "whether designers should code" is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?
Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.
Content first
Content is not constant. After all, to design for the unknown or unexpected we need to account for content changes, like our earlier Subgrid card example that allowed the cards to respond to adjustments to their own content and the content of sibling elements.
Thankfully, there's more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.
Instead of old markup hacks like this...
<p>
<span class="first-line">First line of text with different styling</span>...
</p>
...we can target content based on where it appears.
.element::first-line {
font-size: 1.4em;
}
.element::first-letter {
color: red;
}
Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right), something CSS Grid also does with functions like min(), max(), and clamp().
This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins but was often limited to switching from left-to-right to right-to-left orientation.
In the Sass version, directional variables need to be set.
$direction: rtl;
$opposite-direction: ltr;
$start-direction: right;
$end-direction: left;
These variables can be used as values...
body {
direction: $direction;
text-align: $start-direction;
}
...or as properties.
margin-#{$end-direction}: 10px;
padding-#{$start-direction}: 10px;
However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction.
margin-inline-end: 10px; /* maps to left or right automatically, following the writing direction */
padding-inline-start: 10px;
There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.
Like the earlier examples, these properties help to build out designs that aren't constrained to one language; the design will reflect the content's needs.
Fixed and fluid
We briefly covered the power of combining fixed widths with fluid widths with intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value with a flexible alternative.
For min() this means pairing a fluid value with a fixed maximum.
.element {
width: min(50%, 300px);
}
The element in the figure above will be 50% of its container as long as the element's width doesn't exceed 300px.
For max() we can pair a flexible value with a fixed minimum.
.element {
width: max(50%, 300px);
}
Now the element will be 50% of its container as long as the element's width is at least 300px. This means we can set limits but allow content to react to the available space.
The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.
.element {
width: clamp(300px, 50%, 600px);
}
This time, the element's width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px.
With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.
Situation first
Thanks to what we've discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman's quote, "...situations you haven't imagined"?
It's a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.
This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.
Thankfully, there is a lot we can do to provide choice.
Responsible design
"There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure."
"I Used the Web for a Day on a 50 MB Budget"
Chris Ashton
One of the biggest assumptions we make is that people interacting with our designs have a good wifi connection and a wide-screen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport using smaller mobile devices that can experience drops in connectivity. There is nothing more frustrating than a web page that won't load, but there are ways we can help users use less data or deal with sporadic connectivity.
The srcset attribute allows the browser to decide which image to serve. This means we can create smaller "cropped" images to display on mobile devices, in turn using less bandwidth and less data.
<img
src="image-file.jpg"
srcset="large.jpg 1024w,
medium.jpg 640w,
small.jpg 320w"
alt="Image alt text" />
The preload keyword, used with the link element's rel attribute, can also help us to think about how and when media is downloaded. It can be used to tell a browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience.
<link rel="stylesheet" href="style.css"> <!--Standard stylesheet markup-->
<link rel="preload" href="style.css" as="style"> <!--Preload stylesheet markup-->
There's also native lazy loading, which indicates assets that should only be downloaded when they are needed.
<img src="image.png" loading="lazy" alt="…">
With srcset, preload, and lazy loading, we can start to tailor a user's experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser's to make.
So how can we put users in control?
The return of media queries
Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.
We've long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it's less about one-size-fits-all and more about serving adaptable content.
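For instance, a hover check along these lines (the selectors are purely illustrative) reserves hover-only affordances for devices that can actually hover, while touch users get the always-visible version:
/* Only hide actions behind a hover state when a precise, hover-capable pointer is present */
@media (hover: hover) and (pointer: fine) {
  .card .actions {
    opacity: 0;
  }
  .card:hover .actions {
    opacity: 1;
  }
}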
As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.
For example, there's a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.
@media (light-level: normal) {
  :root { /* custom properties must be declared on a selector, such as :root */
    --background-color: #fff;
    --text-color: #0b0c0c;
  }
}
@media (light-level: dim) {
  :root {
    --background-color: #efd226;
    --text-color: #0b0c0c;
  }
}
Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don't have to spend time making each site they visit more usable.
Media queries like this go beyond choices made by a browser to grant more control to the user.
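Paired with the same custom-property approach as the light-level example, a sketch of these preference-based queries might look like this:
/* Respect an OS- or browser-level dark mode preference */
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0b0c0c;
    --text-color: #fff;
  }
}
/* Tone down animation for people who have asked for less motion */
@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}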
Expect the unexpected
In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market.
We can't design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products.
A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real time.
When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries.
Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.
Asynchronous Design Critique: Getting Feedback
"Any comment?" is probably one of the worst ways to ask for feedback. It's vague and open ended, and it doesn't provide any indication of what we're looking for. Getting good feedback starts earlier than we might expect: it starts with the request.
It might seem counterintuitive to start the process of receiving feedback with a question, but that makes sense if we realize that getting feedback can be thought of as a form of design research. In the same way that we wouldn't do any research without the right questions to get the insights that we need, the best way to ask for feedback is also to craft sharp questions.
Design critique is not a one-shot process. Sure, any good feedback workflow continues until the project is finished, but this is particularly true for design because design work continues iteration after iteration, from a high level to the finest details. Each level needs its own set of questions.
And finally, as with any good research, we need to review what we got back, get to the core of its insights, and take action. Question, iteration, and review. Let's look at each of those.
The question
Being open to feedback is essential, but we need to be precise about what we're looking for. Just saying "Any comment?", "What do you think?", or "I'd love to get your opinion" at the end of a presentation (whether it's in person, over video, or through a written post) is likely to get a number of varied opinions or, even worse, get everyone to follow the direction of the first person who speaks up. And then... we get frustrated because vague questions like those can turn a high-level flows review into people instead commenting on the borders of buttons. Which might be a hearty topic, so it might be hard at that point to redirect the team to the subject that you had wanted to focus on.
But how do we get into this situation? It's a mix of factors. One is that we don't usually consider asking as a part of the feedback process. Another is how natural it is to just leave the question implied, expecting the others to be on the same page. Another is that in nonprofessional discussions, there's often no need to be that precise. In short, we tend to underestimate the importance of the questions, so we don't work on improving them.
The act of asking good questions guides and focuses the critique. It's also a form of consent: it makes it clear that you're open to comments and what kind of comments you'd like to get. It puts people in the right mental state, especially in situations when they weren't expecting to give feedback.
There isn't a single best way to ask for feedback. It just needs to be specific, and specificity can take many shapes. A model for design critique that I've found particularly useful in my coaching is the one of stage versus depth.
"Stage" refers to each of the steps of the process (in our case, the design process). In progressing from user research to the final design, the kind of feedback evolves. But within a single step, one might still review whether some assumptions are correct and whether there's been a proper translation of the amassed feedback into updated designs as the project has evolved. A starting point for potential questions could derive from the layers of user experience. What do you want to know: Project objectives? User needs? Functionality? Content? Interaction design? Information architecture? UI design? Navigation design? Visual design? Branding?
Here are a few example questions that are precise and to the point, each referring to a different layer:
- Functionality: Is automating account creation desirable?
- Interaction design: Take a look through the updated flow and let me know whether you see any steps or error states that I might've missed.
- Information architecture: We have two competing bits of information on this page. Is the structure effective in communicating them both?
- UI design: What are your thoughts on the error counter at the top of the page that makes sure that you see the next error, even if the error is out of the viewport?
- Navigation design: From research, we identified these second-level navigation items, but once you're on the page, the list feels too long and hard to navigate. Are there any suggestions to address this?
- Visual design: Are the sticky notifications in the bottom-right corner visible enough?
The other axis of specificity is about how deep you'd like to go on what's being presented. For example, we might have introduced a new end-to-end flow, but there was a specific view that you found particularly challenging and you'd like a detailed review of that. This can be especially useful from one iteration to the next where it's important to highlight the parts that have changed.
There are other things that we can consider when we want to achieve more specific, and more effective, questions.
A simple trick is to remove generic qualifiers from your questions like "good," "well," "nice," "bad," "okay," and "cool." For example, asking, "When the block opens and the buttons appear, is this interaction good?" might look specific, but you can spot the "good" qualifier, and convert it to an even better question: "When the block opens and the buttons appear, is it clear what the next action is?"
Sometimes we actually do want broad feedback. That's rare, but it can happen. In that sense, you might still make it explicit that you're looking for a wide range of opinions, whether at a high level or with details. Or maybe just say, "At first glance, what do you think?" so that it's clear that what you're asking is open ended but focused on someone's impression after their first five seconds of looking at it.
Sometimes the project is particularly expansive, and some areas may have already been explored in detail. In these situations, it might be useful to explicitly say that some parts are already locked in and aren't open to feedback. It's not something that I'd recommend in general, but I've found it useful to avoid falling again into rabbit holes of the sort that might lead to further refinement but aren't what's most important right now.
Asking specific questions can completely change the quality of the feedback that you receive. People with less refined critique skills will now be able to offer more actionable feedback, and even expert designers will welcome the clarity and efficiency that comes from focusing only on what's needed. It can save a lot of time and frustration.
The iteration
Design iterations are probably the most visible part of the design work, and they provide a natural checkpoint for feedback. Yet a lot of design tools with inline commenting tend to show changes as a single fluid stream in the same file, and those types of design tools make conversations disappear once they're resolved, update shared UI components automatically, and compel designs to always show the latest version, unless these would-be helpful features were to be manually turned off. The implied goal that these design tools seem to have is to arrive at just one final copy with all discussions closed, probably because they inherited patterns from how written documents are collaboratively edited. That's probably not the best way to approach design critiques, though I don't want to be too prescriptive here: it could work for some teams.
The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. I'm going to use the term iteration post for this. It refers to a write-up or presentation of the design iteration followed by a discussion thread of some kind. Any platform that can accommodate this structure will work. By the way, when I refer to a "write-up or presentation," I'm including video recordings or other media too: as long as it's asynchronous, it works.
Using iteration posts has many advantages:
- It creates a rhythm in the design work so that the designer can review feedback from each iteration and prepare for the next.
- It makes decisions visible for future review, and conversations are likewise always available.
- It creates a record of how the design changed over time.
- Depending on the tool, it might also make it easier to collect feedback and act on it.
These posts of course don't mean that no other feedback approach should be used, just that iteration posts could be the primary rhythm for a remote design team to use. And other feedback approaches (such as live critique, pair designing, or inline comments) can build from there.
I don't think there's a standard format for iteration posts. But there are a few high-level elements that make sense to include as a baseline:
- The goal
- The design
- The list of changes
- The questions
Each project is likely to have a goal, and hopefully it's something that's already been summarized in a single sentence somewhere else, such as the client brief, the product manager's outline, or the project owner's request. So this is something that I'd repeat in every iteration post, literally copying and pasting it. The idea is to provide context and to repeat what's essential to make each iteration post complete so that there's no need to find information spread across multiple posts. If I want to know about the latest design, the latest iteration post will have all that I need.
This copy-and-paste part introduces another relevant concept: alignment comes from repetition. So having posts that repeat information is actually very effective toward making sure that everyone is on the same page.
The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that's been done. In short, it's any design artifact. For the final stages of work, I prefer the term blueprint to emphasize that I'll be showing full flows instead of individual screens to make it easier to understand the bigger picture.
It can also be useful to label the artifacts with clear titles because that can make it easier to refer to them. Write the post in a way that helps people understand the work. It's not too different from organizing a good live presentation.
For an efficient discussion, you should also include a bullet list of the changes from the previous iteration to let people focus on what's new, which can be especially useful for larger pieces of work where keeping track, iteration after iteration, could become a challenge.
And finally, as noted earlier, it's essential that you include a list of the questions to drive the design critique in the direction you want. Doing this as a numbered list can also help make it easier to refer to each question by its number.
Not all iterations are the same. Earlier iterations don't need to be as tightly focused; they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what's possible. Then later, the iterations start settling on a solution and refining it until the design process reaches its end and the feature ships.
I want to highlight that even if these iteration posts are written and conceived as checkpoints, by no means do they need to be exhaustive. A post might be a draft (just a concept to get a conversation going) or it could be a cumulative list of each feature that was added over the course of each iteration until the full picture is done.
Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. This might look like a minor labelling tip, but it can help in multiple ways:
- Unique: It's a clear unique marker. Within each project, one can easily say, "This was discussed in i4," and everyone knows where they can go to review things.
- Unassuming: It works like versions (such as v1, v2, and v3), but in contrast, versions create the impression of something that's big, exhaustive, and complete. Iterations must be able to be exploratory, incomplete, partial.
- Future proof: It resolves the "final" naming problem that you can run into with versions. No more files named "final final complete no-really-its-done." Within each project, the largest number always represents the latest iteration.
To mark when a design is complete enough to be worked on, even if there might be some bits still in need of attention and in turn more iterations needed, the wording release candidate (RC) could be used to describe it: "with i8, we reached RC" or "i12 is an RC."
The review
What usually happens during a design critique is an open discussion, with a back and forth between people that can be very productive. This approach is particularly effective during live, synchronous feedback. But when we work asynchronously, it's more effective to use a different approach: we can shift to a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and we can analyze it accordingly.
This shift has some major benefits that make asynchronous feedback particularly effective, especially around these friction points:
- It removes the pressure to reply to everyone.
- It reduces the frustration from swoop-by comments.
- It lessens our personal stake.
The first friction point is feeling a pressure to reply to every single comment. Sometimes we write the iteration post, and we get replies from our team. It's just a few of them, it's easy, and it doesn't feel like a problem. But other times, some solutions might require more in-depth discussions, and the number of replies can quickly increase, which can create a tension between trying to be a good team player by replying to everyone and doing the next design iteration. This might be especially true if the person who's replying is a stakeholder or someone directly involved in the project who we feel that we need to listen to. We need to accept that this pressure is absolutely normal, and it's human nature to try to accommodate people who we care about. Sometimes replying to all comments can be effective, but if we treat a design critique more like user research, we realize that we don't have to reply to every comment, and in asynchronous spaces, there are alternatives:
- One is to let the next iteration speak for itself. When the design evolves and we post a follow-up iteration, that's the reply. You might tag all the people who were involved in the previous discussion, but even that's a choice, not a requirement.
- Another is to briefly reply to acknowledge each comment, such as "Understood. Thank you," "Good points, I'll review," or "Thanks. I'll include these in the next iteration." In some cases, this could also be just a single top-level comment along the lines of "Thanks for all the feedback, everyone; the next iteration is coming soon!"
- Another is to provide a quick summary of the comments before moving on. Depending on your workflow, this can be particularly useful as it can provide a simplified checklist that you can then use for the next iteration.
The second friction point is the swoop-by comment, which is the kind of feedback that comes from someone outside the project or team who might not be aware of the context, restrictions, decisions, or requirements, or of the previous iterations' discussions. On their side, there's something that one can hope they might learn: they could start to acknowledge that they're doing this and could be more conscious in outlining where they're coming from. Swoop-by comments often trigger the simple thought "We've already discussed this...", and it can be frustrating to have to repeat the same reply over and over.
Let's begin by acknowledging again that there's no need to reply to every comment. If, however, replying to a previously litigated point might be useful, a short reply with a link to the previous discussion for extra details is usually enough. Remember, alignment comes from repetition, so it's okay to repeat things sometimes!
Swoop-by comments can still be useful for two reasons: they might point out something that still isn't clear, and they also have the potential to stand in for the point of view of a user who's seeing the design for the first time. Sure, you'll still be frustrated, but that might at least help in dealing with it.
The third friction point is the personal stake we could have with the design, which could make us feel defensive if the review were to feel more like a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego (because yes, even if we don't want to admit it, it's there). And ultimately, treating everything in aggregated form allows us to better prioritize our work.
Always remember that while you need to listen to stakeholders, project owners, and specific advice, you don't have to accept every piece of feedback. You have to analyze it and make a decision that you can justify, but sometimes "no" is the right answer.
As the designer leading the project, you're in charge of that decision. Ultimately, everyone has their specialty, and as the designer, you're the one who has the most knowledge and the most context to make the right decision. And by listening to the feedback that you've received, you're making sure that it's also the best and most balanced decision.
Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.