FontFacing is an AI-driven desktop app concept for typography design. It analyzes your digital designs and inputs as you work, building a training model from them. When you change the path of one letterform, FontFacing analyzes the change and, where appropriate, propagates it across the rest of the typeface. Multiple features in the interface provide feedback about when, where, why, and how the AI is working, so that designers can use them to their full potential. FontFacing represents the possibility of AI vastly streamlining repetitive tasks without taking away human agency or control.
After I completed the project, OpenAI notably added a new feature to ChatGPT that validates, in part, the insights and direction of this project: a feedback system similar to the one I explored in FontFacing. For more details, see below.
Designing an entire, cohesive typeface is a demanding, time-intensive task that often requires weeks or months of meticulous work. Each glyph must harmonize with the entire character set, making it difficult for designers to maintain visual consistency and experiment freely. Without intelligent support, the complexity of the process stifles creative exploration and increases the risk of burnout.
How can we help type designers create a complete, consistent typeface in less time without losing their creative vision?
I made my first training model from an assortment of photos of beer. The results are a little odd, with some images looking okay at a cursory glance but showing more noticeable errors the closer you look.
I made a second training model using a subset of the beer photos selected for their uniformity (mostly similar photos of different, single 16-ounce cans of beer on a white table with a white background). The depiction of beer cans is fantastic, but any other aspect of the photos, such as a background other than white, comes out extremely abstracted.
I realized pretty quickly that I wanted to use this project to work with vector graphics, so I took the opportunity to make a training model to test more basic shape creation. Since RunwayML’s resources are limited to more straightforward image/video tools rather than vector graphics, I created 30 JPGs, each showing a single capital letter ‘A’ in a different typeface. The resulting output recognizes the shapes and patterns, but not that they’re letters.
As I narrowed my concept down to working with SVGs, I wanted to check on current tools’ ability to create them, so I tested a few existing AI platforms to see how they responded. On the left is an SVG made by asking ChatGPT for the XML code for a capital letter ‘A’; on the right is one created by Adobe Illustrator’s beta AI tools.
Left: vector made via XML from ChatGPT; Right: generated by Adobe Illustrator’s beta AI.
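For a sense of the markup involved, here is a minimal, hand-written sketch of a capital ‘A’ as a single SVG path, with the counter cut out using the even-odd fill rule. The coordinates are my own rough approximation for illustration; this is not the actual output from ChatGPT or Illustrator.

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <!-- First subpath: the outer outline of the 'A', with a notch between the legs.
       Second subpath: the triangular counter above the crossbar.
       fill-rule="evenodd" turns the inner triangle into a hole. -->
  <path fill="black" fill-rule="evenodd"
        d="M 50 10 L 15 90 L 30 90 L 40 65 L 60 65 L 70 90 L 85 90 Z
           M 45 52 L 55 52 L 50 30 Z"/>
</svg>
```

Even at this toy scale, the format only stores coordinates; nothing in the markup says “this is a letter A” or that the two diagonals should stay symmetrical, which is part of why generated letterforms drift.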
I did an extensive search to see if this was already being worked on and, as expected, there are various image generation and SVG tools already. Since I’d been testing with basic letterforms, I searched for typeface generators and none existed at the time of this project. There were a lot of copy/text generators, but none for typeface design. In my search, I read through the work of Erik Bern, Måns Grebäck, and Jean Böhm, all of whom had conducted research and experiments into the capabilities of AI to generate letterforms.
Left: experiments from Erik Bern to generate whole typefaces; Right: an in-depth, extensive experiment from Jean Böhm to generate letterforms as actual vector graphics.
Above: Experiments from Måns Grebäck testing the ability of AI to generate the lowercase of a typeface when given the uppercase, and vice versa.
These are the key learnings I tried to keep in mind as I started to explore low-fidelity wireframes and build out my concept.
AI systems often prioritize their capabilities over user interaction, leading to basic input methods such as text boxes. Shifting the focus toward what users can achieve with AI makes for more intuitive and impactful interactions.
Every word or pixel included in the training data influences the output, and outliers are especially influential, since AI can’t identify them as outliers without specific training.
The “magic” of AI comes from not knowing how input is being analyzed and processed, which can be confusing for designers. Transparent, understandable AI interactions help build trust and comprehension.
For best results, users must go beyond data selection and make use of ongoing supervised learning and reinforcement of the model (which, for some AI systems, can be impossible for the user). This approach not only refines pattern recognition but also ensures consistent performance.
I quickly realized that I wasn’t designing with my insights in mind, but rather letting existing applications decide the direction of my design exploration and user interactions.
After realizing this, I scrapped these ideas and started over by mapping the current workflow of typeface design. This would allow me to consider where AI could support the current process.
The sketching and digitizing stages of typeface design are extensive and may require multiple passes. Depending on the designer, the entire set of characters (lowercase, uppercase, numbers, and so on: roughly 26–70 characters at minimum, or hundreds depending on the needs of the typeface) may need to be sketched multiple times.
Many typeface designers will start the sketching phase by working on control characters, the design of which will help define other characters.
For instance, the pieces of the lowercase ‘n’ can be used to work out the i, l, h, m, u, r, and t, and they also establish the x-height for the other lowercase letters. After working on the lowercase ‘o’, you have the building blocks for the b, p, d, q, c, and e. You’d still have to work out some of the finer details, but from just those two control characters, the lowercase ‘n’ and ‘o’, you’d have much of the major work done for 15 characters, along with the pieces necessary to work on many others (see the sketch below).
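That reuse can be loosely sketched in SVG terms with <defs> and <use>: a stem and an arch defined once and recombined into crude ‘l’, ‘n’, and ‘m’ shapes. This is only an illustration of the component reuse described above, with made-up coordinates, not how a type design tool actually stores glyphs.

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 240 100">
  <defs>
    <!-- one vertical stem, defined once -->
    <rect id="stem" width="12" height="60"/>
    <!-- one arch (the shoulder of the 'n'), also defined once -->
    <path id="arch" d="M 0 60 V 24 Q 0 0 24 0 Q 48 0 48 24 V 60
                       H 36 V 24 Q 36 12 24 12 Q 12 12 12 24 V 60 Z"/>
  </defs>

  <!-- crude 'l': a single stem -->
  <use href="#stem" x="10" y="20"/>

  <!-- crude 'n': the same stem plus the shared arch -->
  <use href="#stem" x="40" y="20"/>
  <use href="#arch" x="40" y="20"/>

  <!-- crude 'm': the same stem plus two arches -->
  <use href="#stem" x="110" y="20"/>
  <use href="#arch" x="110" y="20"/>
  <use href="#arch" x="146" y="20"/>
</svg>
```

Change the stem or the arch in one place and every glyph built from it changes too, which is exactly the kind of propagation FontFacing is meant to handle across a full character set.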
Understanding that a typeface is, at its core, a visual pattern that repeats to make up all of its characters, we can begin to see how AI and type design can intersect. The consistency with which that pattern is applied across the characters is what defines a font’s visual style (and the feelings it imparts to the reader).
A training model should be able to pick up many, if not all, of the repeating visual cues from a relatively small sample. Built correctly, an AI system should therefore be able to speed up the typeface design workflow.
A consistent collection of design elements that repeat and, as they combine, form the characters/letters that make up written language.
AI is fantastic at pattern recognition and reproduction, and the typeface design process is a back-and-forth dance of refining the consistency and visual design of elements that will invariably be repeated throughout a typeface. From there, it was an almost natural leap to the refined concept behind the application: the AI can analyze the pattern of your design, while you design, and use that pattern to build out and make alterations to the rest of the character set.
Moving toward medium fidelity, I introduced a sidebar that previews how the AI is interpreting and processing data. It also gives designers manual input via supervised learning, letting them allow or disallow the AI’s interpretations of the data.
I looked at the interfaces of existing typeface design platforms, and design apps in general, to get a better sense of the tools designers are used to working with. This would make my design easier to adopt.
Making fine adjustments between font styles of a typeface using FontLab.
Since the concept had evolved, with designers now the target audience, at this stage it made sense to iterate toward a desktop application instead of a web app.
This iteration also introduces the “Stage” slider, which tells the platform which stage of the process you’re in.
To highlight that there are AI features, I experimented with the idea of a pop-up modal that would appear whenever a new file was started.
Unfortunately, this precludes designers having differing workflows and can push them into thinking they must select one particular way to work.
Although the modal wouldn’t survive later iterations, the use of color to indicate AI features ended up being a great highlight.
This platform should seek to empower the designer, however they want to work; obstructing their design process with a pop-up goes against that. I mapped out a more circular workflow for the design process, with AI seamlessly integrated into it.
This workflow assumes the designer will be moving back and forth and should therefore have access to the AI features at all times, whenever they might need them.
I conducted a brief exercise to figure out the visual design language I wanted to carry forward for the next revisions.
This exploration represents what would become the finalized layout of the interface. The previous buttons for accessing Layers and the Character Set have become a separate sidebar on the left. I tried to depart from the primarily gray color palettes of most design platforms, but the result is overbearing: there’s too much color.
I pared back the use of color from the previous version and explored more ways to use the blue and purple to highlight the AI features.
After working on this project and developing the AI-related features, I saw some interesting updates come to ChatGPT that helped validate, at least in part, my focus on providing feedback around the use of AI features.
This is how ChatGPT worked at the time I conceived of FontFacing: you enter a prompt, see a small dot that amounts to a "Loading" animation, and then get your generated response.
This is a concept I'd come up with for FontFacing, specifically as a solution to a particular insight: existing AI tools have a startling lack of feedback and transparency, which often makes interacting with AI seem like magic. They also make it difficult to refine or iterate on your prompts and interactions, because there's no way to see how the previous interaction was analyzed.
As if to validate some of my conceptual work, OpenAI released a preview of their newest model, o1, late in 2024. After you enter a prompt, there's a flash of feedback that relates back to parts of the prompt, as if to say, "This is what you asked for, let me think about it."
As developed, the concept itself is structurally sound, and I firmly believe it could successfully empower designers. There would be a lot of difficulty around the base training sets, since the platform would need extensive training on existing typefaces to understand letterform anatomy and character sets; typefaces are intellectual property, and their use for AI training could be contentious.
Given the opportunity, I would love to continue exploring this concept; the possibilities for micro-interactions alone are endless. Some of the areas I’d like to keep iterating on are the Options Bar and the Stage Slider, since there isn’t a clear indication that the stages are parts of a design process meant to be moved through toward an end goal. This design is also, roughly, the bare minimum the application would require to be considered a serious design tool. Professional typeface design involves a plethora of functions that aren’t represented here and would need to be worked into the interface in some way; not everything can be folded into a menu.