AI/LLM

Deploying a UX Writing LLM in Figma

Scope:

Content quality, design tooling, AI-assisted workflows, coding

Team:

AI Deployment Team (DTXD)

Company:

ServiceNow

Timeline:

4 weeks

ServiceNow is a cloud-based software company specializing in enterprise-level IT service management and AI-driven workflow automation. It allows businesses to digitize and automate workflows across IT, HR, customer service, and security operations.

Problem:

PMs and designers struggled to consistently reference content rules while designing. Content decisions were often revisited late in the process, leading to repeated rework across design and engineering and unnecessary consumption of billable hours.

Content quality was continuously breaking down during execution because:


  • Designers lacked fast, in-context ways to validate content decisions

  • Content rules were inconsistently applied across titles, body copy, and CTAs

  • Engineers frequently reworked shipped designs due to late-stage content issues

  • Teams spent significant time debating and correcting content rather than building

Goal:

Reduce the cost of content rework by catching UX writing issues before designs ship. Deliver the final content guidelines for the LLM in JSON.

Diagnosis:

Through interviews with designers and PMs, I identified a clear pattern:
the majority of time spent writing, discussing, and reworking content clustered around a small set of elements:

  • Titles / headers

  • Subtitles and body copy

  • CTAs and hyperlinked text

Billable hours spent:

  • Designers spent ~6 hours per project writing and correcting content.

  • Engineers also spent measurable time reworking content post-handoff due to clarity, consistency, or standards issues.

Hypothesis:

If we prioritized detecting and correcting issues in the content types that teams spent the most time reworking, we could significantly reduce both time and cost.

Solution:

I proposed a scoped MVP of deliverables focused on a high-impact set of content types:

  • Titles / headers

  • Subtitles

  • Body copy

  • CTAs and hyperlinked text

By intentionally limiting scope, we could:

  • Test and iterate on content rules in manageable batches

  • Improve the model incrementally

  • Avoid setting unrealistic expectations for accuracy or coverage in early versions

Partners:

  • A UX designer

  • UX Design Systems

  • An engineer

Data:

The primary data source for this project was qualitative user feedback from PMs and designers, supplemented by:

  • Existing content style guide rules

  • Observed rework patterns during design and engineering handoff

Process:

  1. Interviewed designers to understand how much time they spent writing and revising content per project.

  2. Collaborated with design partners to define MVP scope and success criteria.

  3. Used the content style guide as the baseline knowledge source for the LLM.

  4. Engineered prompts that clearly defined:

    • The tool’s persona and role

    • What the plugin should evaluate

    • What it explicitly should not do

This ensured predictable, trustworthy feedback while keeping the system lightweight.
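As an illustration, a persona, an evaluation scope, and explicit exclusions like those above could be assembled into a system prompt along these lines. The wording, interface, and values here are a hypothetical sketch, not the prompt we actually shipped:

```typescript
// Hypothetical sketch of assembling a plugin's system prompt from three parts:
// persona, what to evaluate, and what to explicitly avoid.

interface PromptConfig {
  persona: string;     // who the tool "is"
  evaluates: string[]; // content types the plugin should check
  excludes: string[];  // behaviors the plugin must not perform
}

function buildSystemPrompt(cfg: PromptConfig): string {
  return [
    `You are ${cfg.persona}.`,
    `Evaluate only the following content types: ${cfg.evaluates.join(", ")}.`,
    `Do NOT: ${cfg.excludes.join("; ")}.`,
  ].join("\n");
}

// Illustrative values only — the real persona and rules came from the style guide.
const prompt = buildSystemPrompt({
  persona: "a UX writing reviewer grounded in the content style guide",
  evaluates: ["titles/headers", "subtitles", "body copy", "CTAs and hyperlinked text"],
  excludes: ["rewrite content wholesale", "comment on visual design", "invent new style rules"],
});

console.log(prompt);
```

Scoping the prompt this way is what keeps feedback predictable: the model is told both what to look at and, just as importantly, what is out of bounds.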

Outcome:

The MVP established a foundation for catching high-cost content issues earlier in the design process.


It also simplified aligning design teams around shared standards, without requiring manual cross-referencing or late-stage reviews.

Project Issues

Problem: Typography rules without application guidance

The design system defined typography styles but offered no guidance on when each style should be applied. As a result, designers were applying typography based on visual preference rather than intent or consistency. Headers and body styles were used interchangeably across screens, which created two compounding problems:


  1. I couldn’t confidently define examples of good vs. bad content or set reliable character-count guidance, because the same style meant different things in different contexts.

  2. Our rule-based LLM logic misread the UI. Since it relied on design-system semantics to interpret content hierarchy, inconsistent usage caused the plugin to misclassify content and return inaccurate feedback.
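To make the second failure mode concrete, a classifier of the kind described could be sketched as below, where a text layer's content role is inferred purely from its design-system style name. The style names and role map are illustrative, not the actual design-system tokens:

```typescript
// Hypothetical sketch: inferring a content role from a design-system text style.
// If a designer applies a body style to what is visually a header, the classifier
// labels it "body" and every downstream header rule is applied to the wrong text.

type ContentRole = "header" | "subtitle" | "body" | "cta" | "unknown";

// Illustrative style-name-to-role map; real names would come from the design system.
const STYLE_TO_ROLE: Record<string, ContentRole> = {
  "heading-lg": "header",
  "heading-sm": "subtitle",
  "body-md": "body",
  "link-action": "cta",
};

function classify(styleName: string): ContentRole {
  return STYLE_TO_ROLE[styleName] ?? "unknown";
}

// Consistent usage: style semantics match visual intent.
classify("heading-lg"); // → "header"

// Inconsistent usage: a header styled with "body-md" is misread as body copy,
// so header rules (character limits, casing) are never checked against it.
classify("body-md");    // → "body"
```

This is why inconsistent typography wasn't just a visual problem: it silently broke the semantic layer the rule logic depended on.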

Solutions Considered

We identified three possible ways forward:


  1. Educate users upfront
    Communicate on the intro screen that the plugin evaluates designs based on the design system’s rules—and that inconsistent typography usage would limit the accuracy of feedback.

  2. Close the system gap
    Partner with the design systems team to define clear application guidelines for header, body, and CTA typography, establishing shared semantics across the product.

  3. Reduce tool intelligence
    Constrain the LLM to basic checks only, avoiding misinterpretation—but at the cost of significantly reducing the tool’s value.

Final Decision

We chose to execute solutions 1 and 2. I also delivered the final content guidelines for the LLM in JSON.

DELIVERING CONTENT IN JSON
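The shipped guidelines themselves aren't reproduced here, but a JSON structure of this general shape—one rule object per content type, with limits, rules, and good/bad examples—is a plausible sketch. All field names, limits, and example strings below are hypothetical placeholders, not the actual deliverable:

```typescript
// Illustrative shape for content guidelines delivered as JSON.
// Field names, limits, and examples are hypothetical placeholders.

const guidelines = {
  contentTypes: [
    {
      type: "title",
      maxChars: 40,
      case: "sentence",
      rules: ["Lead with the user's goal", "No terminal punctuation"],
      good: ["Create a workflow"],
      bad: ["Workflow Creation Page."],
    },
    {
      type: "cta",
      maxChars: 20,
      case: "sentence",
      rules: ["Start with a verb", "Describe the action, not the mechanism"],
      good: ["Save changes"],
      bad: ["Click here"],
    },
  ],
};

// A consumer (e.g. the plugin's prompt builder) can serialize this directly
// and inject it into the model's context.
const payload = JSON.stringify(guidelines, null, 2);
console.log(payload);
```

Structuring the guidelines as data rather than prose is what lets the same rules drive both the LLM's context and any future deterministic checks.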
