GuardAI – AI Risk Management Framework

Contributed to the redesign of GuardAI, a risk management platform aimed at helping organizations safeguard their generative AI services against misuse and align with evolving US/EU compliance standards.

Role
UI/UX Designer

Duration
3 Months

Tools
Figma

Company
Himalaya Quantitative Solutions

About

GuardAI is an AI risk management framework developed by Himalaya Quantitative Solutions. It protects generative AI systems from harmful or malicious usage, helping developers and businesses comply with evolving US and EU regulations.

As a volunteer UI/UX designer, I redesigned GuardAI’s public-facing interface to improve product clarity, usability, and visual credibility—directly supporting client acquisition and product demos.

Challenge

The original GuardAI interface lacked credibility and structure, making it difficult for users and potential business clients to understand its capabilities. This posed a barrier for the team’s goal: showcasing the platform’s ability to prevent misuse of GenAI (e.g., jailbreak prompts, harmful content) and help enterprise clients meet regulatory standards.

Contributions

  • Redesigned the GuardAI website to improve structure, clarity, and professionalism, with emphasis on compliance, use cases, and AI safety messaging.

  • Improved visual hierarchy and navigation to better communicate product value (e.g., Defense, Risk Metrics, Use Cases).

  • Collaborated closely with engineers to align on feasibility and implementation timelines.

  • Enhanced product demos used in business presentations, helping improve engagement with potential clients.

Homepage
Redesign: Establishing Trust from the Start

To help users and stakeholders understand the vision behind GuardAI, I designed an About page that:

  • Establishes the problem space: security vulnerabilities in AI

  • Explains how GuardAI protects against malicious misuse

  • Communicates the ethical responsibility of building secure models

About: Framing
the Why

To help users and stakeholders understand the vision behind GuardAI, I designed an About page that:

  • Establishes the problem space: security vulnerabilities in AI

  • Explains how GuardAI protects against malicious misuse

  • Communicates the ethical responsibility of building secure models

Case Study: Transparency in Action

We created a public-facing case study to share real use cases and evaluation outcomes. It allows users to:

  • View examples of prompt injections and data leaks

  • Understand how defenses perform in different scenarios

  • Build credibility and share research with others

Vulnerability Detection &
Defense

⚠️ Harmful Prompt Tab: Exposing Prompt Injection Risks

This tab allows users to test how easily models can be manipulated.

  • Input: Email, model, original prompt, # of attack variations

  • Defense dropdown: Choose mitigation method

  • Attack preview: See how attackers modified the original prompt

  • Real-time feedback: Did the model follow the attack prompt?

🔒 Privacy Tab: Preventing Data Leakage

Tests if the model can reveal sensitive information it's exposed to.

  • Inputs: Email, model, prompt

  • File reference: Simulates model access to private files

  • Output reveals if sensitive data was unintentionally surfaced

🟰 Bias Tab: Detecting Fairness and Equity Issues

This tab allows users to test model outputs across identity-based prompts.

  • Side-by-side comparisons with demographic variations

  • Output is scored for inconsistency or skew

  • Designed to flag ethical concerns in hiring, healthcare, etc.

Take a Look to Learn More