Desktop UX for AI Inputs in a Typeface Design App

Info

  • Desktop App
  • 14 Weeks
  • May - August 2023
  • Solo Project

Tools

  • Figma
  • RunwayML

Overview

FontFacing is a desktop app concept for AI-driven typography design. It analyzes your digital design inputs and builds a training model as you work. When you change the path of one letterform, FontFacing analyzes the change and, where necessary, reproduces it across the rest of the typeface. Multiple features within the interface provide feedback about when, where, why, and how the AI is working, so that designers can use those features to their full potential. FontFacing represents the potential of AI to vastly streamline repetitive tasks without taking away human agency or control.

After I completed the project, OpenAI notably added a new feature to ChatGPT that validates, in part, the insights and direction of this project: a feedback system similar to what I explored in FontFacing. For more details, see below.

Problem

  • Designing all of the glyphs of a typeface is a laborious and repetitive process that can take several weeks or even months to complete.
  • Ensuring consistency across letterforms is time-intensive and, throughout development, there are limited opportunities to experiment, iterate, and refine possibilities.
  • Current integration of AI features into existing design tools is limited, at best, and intelligent refinement has to be done manually.

Goals

  • Reduce the manual workload of typeface design by leveraging AI and machine learning's strength in pattern recognition.
  • Utilize AI to maintain stylistic rules across characters to reduce repetitive adjustments.
  • Provide designers with clear modes and controls so they can influence or override AI decisions whenever necessary.

Solution

  • Introduce a mode-based workflow with selections like Live Generation, Details, and more to toggle between AI-driven automation and hands-on manual refinement.
  • Provide feedback within the interface so designers can see how the AI interprets their inputs, understand changes made by AI generation, and confirm or deny AI suggestions.
  • Allow designers to guide the AI through flexible input methods, ensuring the outcome aligns closely with their vision.

The problem of designing a typeface

Designing an entire, cohesive typeface is a demanding, time-intensive task that often requires weeks or months of meticulous work. Each glyph must harmonize with the entire character set, making it difficult for designers to maintain visual consistency and experiment freely. Without intelligent support, the complexity of the process stifles creative exploration and increases the risk of creative burnout.

A mockup of a desktop app meant for typeface design, showing a working canvas with one glyph, toolbars, live view of the entire typeface, and an analysis window showing the training model as it analyzes inputs in real time.

How can we help type designers create a complete, consistent typeface in less time without losing their creative vision?

Highlights

One detail of the app interface which shows a series of buttons (Sketch, Live Generation, Focus, Letterspacing) which users can use to move between working stages

A tab bar to select process stage

Since the typeface design process moves through well-defined stages, albeit with some back and forth, this tab bar lets the designer indicate where in the process they are and see when the native AI platform will be actively learning versus actively generating or adjusting.
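As a rough illustration, the relationship between those stages and the AI's behavior could be modeled as a simple lookup. The stage names come from the tab bar above; the behavior assigned to each stage is an assumption made for this sketch, not a specification of the platform.

```typescript
// Sketch only: mapping each workflow stage (from the tab bar) to what the
// native AI would be doing in that stage. The behavior values are assumptions.
type Stage = "Sketch" | "Live Generation" | "Focus" | "Letterspacing";
type AIBehavior = "learning" | "generating" | "adjusting";

const stageBehavior: Record<Stage, AIBehavior> = {
  "Sketch": "learning",            // the model observes the designer's work and builds its pattern
  "Live Generation": "generating", // the model proposes the remaining characters
  "Focus": "adjusting",            // targeted refinements propagate to related glyphs
  "Letterspacing": "adjusting",    // spacing tweaks applied across the character set
};

console.log(`During Live Generation the AI is ${stageBehavior["Live Generation"]}.`);
```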

One detail of the app interface which shows a window within the sidebar, Analysis, with some realtime AI analysis

Show how the platform analyzes

A specific window displays how the AI is processing input and also gives designers the opportunity for manual input within that process.

One detail of the app interface which shows the canvas with a glyph being edited; below the canvas is a toolbar with options that are specific to what the user's currently doing on the canvas; this canvas has a completed letter but nothing's selected, so the context bar has a text input with the label text reading, `Describe what you'd like to generate or change about this letterform`, along with a `Generate` button and another button for additional options

Context Bar

As is becoming increasingly common in design programs, a contextual toolbar moves around the interface depending on what you're interacting with and offers context-specific options that are particularly relevant at that moment. This keeps AI features front-and-center without obstructing the workflow.
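As a sketch of how that could work, the context bar might derive its options from the current selection. The empty-selection case mirrors the screen described above; the options shown for the other selection types are hypothetical.

```typescript
// Sketch only: choosing context-bar contents from the current selection state.
type Selection = "none" | "point" | "path" | "glyph";

function contextBarOptions(selection: Selection): string[] {
  switch (selection) {
    case "none":
      // Mirrors the mockup above: a free-text prompt plus a Generate button
      return [
        "Describe what you'd like to generate or change about this letterform",
        "Generate",
      ];
    case "point":
      return ["Smooth point", "Align to guide", "Generate alternatives"]; // hypothetical
    case "path":
      return ["Match stroke weight", "Adjust contrast", "Generate alternatives"]; // hypothetical
    default:
      return ["Propagate changes", "Compare with character set"]; // hypothetical glyph-level options
  }
}

console.log(contextBarOptions("none"));
```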


Foundational Research

First Training Models

I made my first training model from an assortment of photos of beer. The results are a little odd: some images look okay at a cursory glance, but the errors become more noticeable the closer you look.

I made a second training model using a subset of the beer photos selected for their uniformity (mostly similar photos of different, singular 16-ounce cans of beer on a white table with a white background). The depiction of beer cans is fantastic, but anything outside that narrow scope, like asking for a background other than white, comes out extremely abstracted.

An AI-generated image of 2 glasses of beer on a misshapen table with a completely inaccurate hand. An AI-generated image of 2 beer cans on a white table with a white background.

Type-Specific Training Model

I realized pretty quickly that I wanted this project to work with vector graphics, so I took the opportunity to make a training model to test more basic shape creation. Since RunwayML’s resources are limited to more straightforward image/video tools rather than vector graphics, I created 30 JPGs showing a single capital letter ‘A’ in different typefaces. The resulting output recognizes the shapes and patterns, but not that they’re letters.

Testing Existing AI-Generation

As I narrowed down my concept to working with SVGs, I wanted to see how well existing tools could create them, so I tested a few different AI platforms. On the left is an SVG made by asking ChatGPT to write the XML code for a capital letter ‘A’; on the right is one created by Adobe Illustrator’s beta AI tools.

Two AI-generated attempts at a vector capital letter ‘A’: one built from ChatGPT’s XML output and one generated by Adobe Illustrator’s beta AI tools.

Left: vector made via XML from ChatGPT; Right: generated by Adobe Illustrator’s beta AI.
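For reference, the markup such a prompt tends to produce looks roughly like the sketch below: a handful of straight stroke commands rather than true letterform outlines. The coordinates and styling are invented for illustration and are not the actual ChatGPT output.

```typescript
// Illustrative sketch only: the kind of SVG XML a "draw a capital letter 'A'"
// prompt tends to return. Two diagonal legs and a crossbar drawn as strokes,
// not proper outlines. All values here are invented for this example.
const capitalA: string = `
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <path d="M 20 90 L 50 10 L 80 90 M 32 62 L 68 62"
        fill="none" stroke="black" stroke-width="8"
        stroke-linecap="round" stroke-linejoin="round" />
</svg>`;

console.log(capitalA);
```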

Existing Research in this Space

I did an extensive search to see if this was already being worked on and, as expected, there were already various image-generation and SVG tools. Since I’d been testing with basic letterforms, I searched for typeface generators; none existed at the time of this project. There were plenty of copy/text generators, but none for typeface design. In my search, I read through the work of Erik Bern, Måns Grebäck, and Jean Böhm, all of whom had conducted research and experiments into the capabilities of AI to generate letterforms.

An animated gif of the alphabet and numbers as they morph in sync through a variety of visual styles. A vast array of different AI-generated versions of a lowercase letter `a`, all appearing along the spectrum from `perfect` to `odd` or `unrecognizable`.

Left: experiments from Erik Bern to generate whole typefaces; Right: in-depth and extensive experiment from Jean Böhm to generate letterforms as actual vector graphics.

A series of typeface pairs with the uppercase of a font next to a lowercase generated by AI, and vice versa.

Above: Experiments from Måns Grebäck to see the abilities of AI to generate the lowercase of a typeface if given the uppercase, and vice versa.

Insights

These are the key learnings I tried to keep in mind as I started to explore low fidelity wireframes and build out my concept.

AI systems often prioritize their capabilities over user interaction, leading to basic input methods such as text boxes. Shifting the focus toward what users can achieve with AI makes for more intuitive and impactful interactions.

Every word or pixel included in training data influences the output, especially outliers, since an AI can’t identify them as outliers without specific training.

The “magic” of AI comes from not knowing how input is being analyzed and processed, but that same opacity can be confusing for designers. Transparent and understandable AI interactions help build trust and comprehension.

For best results, users need to go beyond data selection and make use of ongoing supervised learning and reinforcement of the model (which some AI systems don’t allow the user to do). This not only refines pattern recognition but also ensures consistent performance.


Exploring Low Fidelity

Low Fidelity Exploration

A low fidelity wireframe of an AI-generating website, with a text input and a Generate button

Basic text prompt input

A low fidelity wireframe of an AI-generating website, with a file selector and a Generate button

Uploading images as the prompt

A low fidelity wireframe of an AI-generating website, with a text input and a Generate button, as well as a button that says `Start Drawing`

Letting users input text or start with drawing tools

A low fidelity wireframe of an AI-generating website, but instead of a text input there are drawing options, a dropdown selector for typefaces, and the Generate button

Letting users select an existing typeface before drawing

A low fidelity wireframe of an AI-generating website, with a Style selector and a text input for Keywords next to a Generate button

Using keyword selection to make text input easier

These explorations ignore the research

I quickly realized that I wasn’t designing with my insights in mind, but rather letting existing applications decide the direction of my design exploration and user interactions.

After realizing this, I scrapped these ideas and started over by mapping the current workflow of typeface design. This would allow me to consider where AI could support the current process.


Mapping the Workflow

The Visual Design Aspects are not Linear

The sketching and digitizing stages of typeface design are extensive and may require multiple passes. Depending on the designer and the needs of the typeface, the entire set of characters (lowercase, uppercase, numbers... roughly 26-70 characters at minimum, or into the hundreds) may need to be sketched multiple times.

An array of the workflow of typeface design, mapping out between stages, from Sketching to Digitizing to Finishing, as well as the multiple phases within each stage and arrows back and forth to show the non-linear nature.

Repeating Visual Design Elements

Many typeface designers will start the sketching phase by working on control characters, the design of which will help define other characters.

For instance, you can take the pieces of the lowercase ‘n’ to then work out the i, l, h, m, u, r, t, and you’d also have the x-height for other lowercase letters. After working on the lowercase ‘o’ you’d then have the building blocks for b, p, d, q, c, and e. You’d have to work on some of the finer details, but just from those two, the lowercase ‘n’ and ‘o’, you’d have a lot of the major work done for 15 characters, with the pieces necessary to work on many others.
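That relationship between control characters and the glyphs they unlock can be written down directly as data. The sketch below mirrors the groupings in the paragraph above; the exact derivations will vary from typeface to typeface.

```typescript
// Sketch of the control-character relationships described above: designing
// 'n' and 'o' supplies the major building blocks for roughly 15 characters.
const derivedFromControlGlyphs: Record<string, string[]> = {
  n: ["i", "l", "h", "m", "u", "r", "t"], // shared stems, arches, and the x-height
  o: ["b", "p", "d", "q", "c", "e"],      // shared round bowl forms
};

const covered =
  Object.keys(derivedFromControlGlyphs).length +
  Object.values(derivedFromControlGlyphs).flat().length;

console.log(`Two control characters rough out ${covered} characters.`); // 15
```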

Typefaces in the Context of AI

With the understanding that, at its core, a typeface is a visual pattern that repeats to make up all of its characters, we can begin to see how AI and type design intersect. The consistency with which that pattern is applied across the characters is what defines the visual style of a font (and the feelings it imparts to the reader).

A training model should be able to figure out many, if not all, of the repeating visual cues from a relatively small sample size. Done correctly, an AI system should therefore be able to speed up the workflow of typeface design.

With the process in mind, what is a typeface?

A consistent collection of design elements that repeat and, as they combine, form the characters/letters that make up written language.


Back to Designing

Identifying the Concept

AI is fantastic at pattern recognition and reproduction, and typeface design is a back-and-forth dance of refining the consistency and visual design of elements that will invariably repeat throughout a typeface. The refined concept behind the application was therefore an almost natural leap: the AI analyzes the pattern of your design while you design, and uses that pattern to build out and alter the rest of the character set.
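A minimal sketch of that loop, with all names and structures assumed purely for illustration: the model observes an edit to one glyph and proposes the matching change to related glyphs, which the designer can then confirm or deny.

```typescript
// Sketch only: the core FontFacing concept. The model watches an edit to one
// glyph and proposes the equivalent change across related glyphs; the designer
// reviews each suggestion. Names and shapes here are hypothetical.
interface GlyphEdit {
  glyph: string;       // e.g. "n"
  description: string; // e.g. "increased stem weight"
}

interface Suggestion {
  glyph: string;       // a related glyph, e.g. "h"
  description: string; // the matching change the model proposes
  accepted?: boolean;  // set later by the designer's confirm/deny feedback
}

function propagateEdit(edit: GlyphEdit, relatedGlyphs: string[]): Suggestion[] {
  // The real concept would regenerate outlines; this sketch only records that
  // the same change is being proposed across the character set.
  return relatedGlyphs.map((glyph) => ({
    glyph,
    description: `apply "${edit.description}" from "${edit.glyph}"`,
  }));
}

const suggestions = propagateEdit(
  { glyph: "n", description: "increased stem weight" },
  ["h", "m", "u"],
);
console.log(`${suggestions.length} suggestions awaiting review.`); // 3
```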

More Detailed Iteration

Moving towards medium fidelity, I introduced a sidebar that gives a preview of how the AI is interpreting and processing data. It also allows designers to provide manual input via supervised learning, allowing or disallowing the AI’s interpretations of the data.

A medium fidelity wireframe of the app, the toolbar is at the top of the work area, the work area itself is an infinite space with a paper texture which is currently zoomed into one letter, there's a context bar below the current letter, and there's a sidebar featuring an active analysis window and a live view of the entire working font.
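To make the supervised-learning idea concrete, here is a rough sketch of what one entry in that Analysis window might carry and how allowing or disallowing it would feed back into the model. The field names and example interpretations are assumptions, not a final data model.

```typescript
// Sketch only: an interpretation the model has drawn from the designer's work,
// which the designer can allow or disallow from the Analysis window.
interface Interpretation {
  label: string;           // e.g. "low stroke contrast"
  confidence: number;      // 0..1, how sure the model is
  allowed: boolean | null; // null until the designer gives feedback
}

const analysisFeed: Interpretation[] = [
  { label: "low stroke contrast", confidence: 0.92, allowed: null },
  { label: "rounded terminals", confidence: 0.74, allowed: null },
];

// Allowing or disallowing an interpretation is the supervised-learning signal.
function giveFeedback(item: Interpretation, allowed: boolean): void {
  item.allowed = allowed;
}

giveFeedback(analysisFeed[0], true); // the designer confirms the first interpretation
```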

Existing Typeface Design Platforms

I looked at the interfaces of existing typeface design platforms, and design apps in general, to get a better sense of what designers are used to working with. Building on familiar patterns would make my design easier to adopt.

A screenshot of FontLab showing a lowercase `a` being edited; along with the toolbar and sidebar, as well as the vector in the act of being edited, the screenshot also shows FontLab's ability to edit various font weights in the same frame by layering the different outlines on top of each other.

Making fine adjustments between font styles of a typeface using FontLab.

Moving from a Browser to the Desktop

Since the concept had evolved and the target audience had become designers, at this stage it made sense to iterate towards a desktop application instead of a web app.

This iteration also features the introduction of the “Stage” slider to tell the platform which stage of the process you’re in.

A medium fidelity wireframe of the app, the toolbar is at the top of the work area, the work area itself is an infinite space with a paper texture which is currently zoomed into one letter, there's a context bar below the current letter, and there's a sidebar featuring an active analysis window and a live view of the entire working font, plus the newly introduced Stage slider.

Starting Pop-Up

In an effort to highlight that there are AI features, I experimented with the idea of a pop-up modal which would appear whenever starting a new file.

Unfortunately, this precludes designers having differing workflows and could push them into thinking they must pick one particular way to work.

Although the modal wouldn’t stay, its use of color to indicate AI features ended up being a great highlight.

A medium fidelity wireframe of the pop-up modal that would appear when starting a new file, using color to highlight the AI-related options.


Prototyping

Rethinking the Workflow

This platform should empower the designer to work however they want; obstructing their design process with a pop-up goes against that. I mapped out a more circular workflow for the design process, with AI seamlessly integrated into it.

This workflow assumes that the designer is going to be moving back and forth and, therefore, should have access to the AI features at all times, for whenever they might need them.

A map of the ideal workflow for the platform, showing a kind of circular path where the user can go back and forth between different tools and the AI tools are meant to fit seamlessly into the different steps.

Exploration & Iteration

Exploring Toolbars

To make all of the AI features accessible at any time, I explored a variety of layouts and placements for them, including iterations on the various toolbars.

Context Bar

To further empower designer workflows, I explored some of the functionality behind the contextual toolbar.

A medium fidelity prototype combining the previously explored elements with the current concept; the sidebar has been simplified and the infinite canvas has been replaced with an artboard system to focus on singular glyphs while working; the toolbar has also been simplified with some tools removed and AI tools moved to an Options Bar at the top of the app.

Putting the Elements Together

This version changes the previously “infinite” canvas, which showed the whole character set side by side, into more of an artboard or frame where you work on one character at a time, a model familiar to the graphic designers who use Adobe products. The AI also has a permanent home in the Options Bar, and there are additional elements for accessing the rest of the character set and the working layers of the currently selected character.

Visual Design Elements

I conducted a brief exercise to figure out the visual design language I wanted to carry forward for the next revisions.


Finalized Layout

This exploration represents what would become the finalized layout of the interface. The previous buttons to access Layers and Character Set have become a separate sidebar on the left of the interface. I tried to depart from the primarily gray color palettes of most design platforms, but it’s overbearing here. There's too much color.

A high fidelity wireframe with a more colorful interface which has replaced the predominantly gray interface; some of the buttons have been moved into a second sidebar so that the analysis and navigation are on one side and the glyph selection and layers are on the opposite side.

Finalized Interface

I pared back the use of color from the previous version and explored more ways to use the blue and purple to highlight the AI features.

A high fidelity wireframe building off the previous version; most of the color has been replaced with the shades of gray and there's a selective use of violet and blue (and the gradient between them) to highlight the AI tools and related features (like AI history and the Analysis window).


Project Validation

After I had finished working on this project and its AI-related features, ChatGPT received some interesting updates that helped validate, at least in part, my focus on providing feedback around the use of AI features.

ChatGPT-4

This is how ChatGPT worked at the time I conceived of FontFacing: you enter a prompt, see a small dot amounting to a "Loading" animation, then get your generated response.

Insights for FontFacing

This is a concept I came up with for FontFacing, specifically in response to an insight about the startling lack of feedback and transparency in existing AI tools, which often makes AI feel like a magical interaction. That opacity also makes it difficult to refine or iterate on your prompts, because there's no way to see how the previous interaction was analyzed.

One detail of the app interface which shows a window within the sidebar, Analysis, with some realtime AI analysis.

ChatGPT-o1 Preview

As if to validate some of my conceptual work, OpenAI released a preview of their newest model, ChatGPT-o1, late in 2024. After you enter a prompt, there's a flash of feedback relating to parts of the prompt, as if to say, "This is what you asked for, let me think about it."

Introducing

A mockup of a desktop app meant for typeface design, showing a working canvas with one glyph, toolbars, live view of the entire typeface, and an analysis window showing the training model as it analyzes inputs in real time. A mockup of a MacBook running the desktop app, showing a working canvas with one glyph, toolbars, live view of the entire typeface, and an analysis window showing the training model as it analyzes inputs in real time. A mockup of a MacBook running the desktop app, but this time the canvas is blank so there's no contextual menu and the analysis window is also blank.

Final Thoughts & Next Steps

As developed, the concept itself is structurally sound and I firmly believe that it could successfully empower designers. There would be a lot of difficulty surrounding the base training sets, since the platform would need extensive training on typefaces to understand the anatomy and character sets. Typefaces are intellectual property and their use for AI training could be contentious.

Given the opportunity, I would love to keep exploring this concept; the possibilities for micro-interactions are endless. Two areas I’d like to keep iterating on are the Options Bar and the Stage Slider, since neither clearly indicates that the stages of the design process are meant to be moved through toward an end goal. What’s shown here is also, roughly, the bare minimum the application would need to be taken seriously as a design tool. Professional typeface design involves a plethora of functions that aren’t represented here and would need to be worked into the interface in some way; not everything can be folded into a menu.

A mockup of a MacBook with the Fontfacing app opened to fullscreen to start a new project, with buttons available to start a new file, Open a file, as well as alternate open options, plus a featured banner for a trending font.
