The Agreements@State App initiative sought to modernize how internal Department of State customers request, authorize, and operationalize IT services. These services included application development, integration, licensing, and cloud infrastructure support.
Agreement management historically relied on fragmented intake processes, manual drafting workflows, and decentralized document storage. These conditions created operational inefficiencies, increased compliance risk, and reduced transparency for both service providers and customers.
The introduction of Intake@State — an AI-assisted intake platform — created an opportunity to redesign the entire agreement lifecycle as a structured service ecosystem. My work focused on defining the governance architecture, user experience strategy, and workflow models required to support scalable, compliant agreement management across multiple stakeholder groups.
Project: Service Design | App Design
Client: U.S. Department of State Bureau of Diplomatic Technology
Timeline: February 2025 - September 2025
Role: UX/UI Design Lead
I worked as the UX/UI Design Lead for the Agreements at State application, where my focus was modernizing how technology agreements were requested, reviewed, approved, and fulfilled across Diplomatic Technology teams.
When I joined, the process was heavily manual — agreements moved through email chains, spreadsheets, and disconnected SharePoint folders. My role was to step back and understand the full service lifecycle, not just the interface. I led stakeholder interviews across customer engagement, legal, accounting, product teams, and end users to map the real workflows and identify where delays, rework, and visibility gaps were happening.
From there, I designed the future-state experience that unified intake, approvals, and fulfillment into a single, trackable workflow. A big part of my work involved balancing AI-assisted intake with human engagement, making sure automation improved efficiency without removing critical decision checkpoints.
On the UI side, I designed role-based dashboards, agreement tracking views, and intake experiences that aligned with Power Apps and ServiceNow constraints, ensuring the designs were scalable and realistic to implement in a federal environment.
While the project concluded before full deployment, my work established the experience blueprint, workflow architecture, and interaction standards for how agreements could be processed more efficiently and transparently moving forward.
Technology: Figma / FigJam | AWS Augmented AI | Azure | Microsoft Power Platform
AWS Augmented AI: powers the new customer request form (Intake@State), which evaluates incoming service requests using natural language, categorizes request types (development, integration, licensing, cloud services, etc.), identifies urgency and service complexity, and recommends potential solution pathways. A sketch of this triage logic appears after this technology list.
Microsoft Power Platform (Power Apps, Power Automate): serves as the experience and workflow orchestration layer of the Agreements application. Power Apps - hosts the core Agreements application where users draft, review, and approve agreements.
Power Automate - acts as the workflow conductor, managing how agreements move between stakeholders.
SharePoint Online: serves as the agreement repository and version control layer.
Microsoft Azure Infrastructure: provides the enterprise-grade infrastructure supporting application hosting, authentication, and data security.
Azure Active Directory - provides identity and authentication services for the application.
Figma / FigJam (Service Design & Research Artifacts): where the Agreements ecosystem was conceptualized, validated, and communicated.
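To make the intake behavior concrete, here is a minimal sketch of the triage-and-escalation pattern described above, written in Python. The `classify_request` stub, category labels, and confidence threshold are illustrative assumptions rather than the production design; the `start_human_loop` call is the standard Amazon Augmented AI (A2I) runtime interface, and `flow_definition_arn` would reference a pre-configured human review workflow.

```python
import json
import uuid

import boto3

# Closed category vocabulary and escalation cutoff (illustrative values).
CATEGORIES = {"development", "integration", "licensing", "cloud_services"}
CONFIDENCE_THRESHOLD = 0.80

a2i = boto3.client("sagemaker-a2i-runtime")  # Amazon Augmented AI runtime


def classify_request(text: str) -> tuple[str, float, str]:
    # Hypothetical inference call (e.g., a SageMaker endpoint), stubbed with
    # a canned result so the escalation logic below stays the focus.
    return "cloud_services", 0.72, "routine"


def triage(request_text: str, flow_definition_arn: str) -> dict:
    """Categorize a request and escalate low-confidence results to humans."""
    category, confidence, urgency = classify_request(request_text)
    needs_review = confidence < CONFIDENCE_THRESHOLD or category not in CATEGORIES
    result = {
        "category": category,
        "confidence": confidence,
        "urgency": urgency,
        "needs_human_review": needs_review,
    }
    if needs_review:
        # Low-confidence or unrecognized requests start an A2I human loop
        # instead of flowing straight into downstream workflow routing.
        a2i.start_human_loop(
            HumanLoopName=f"intake-{uuid.uuid4()}",
            FlowDefinitionArn=flow_definition_arn,
            HumanLoopInput={
                "InputContent": json.dumps({"text": request_text, **result})
            },
        )
    return result
```

The point of the pattern is that escalation is a first-class output of intake rather than an exception path bolted on later.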
Fragmented Intake Channels
Prior to Intake@State, service requests were submitted through the Cloud Request Submission Portal (CRSP), email, and ad hoc communication channels. Intake data quality varied widely, requiring manual interpretation and repeated clarification cycles.
Agreement Drafting Was Procedural Rather Than Systematic
Templates were legally compliant but required manual customization and interpretation. Agreement drafting relied heavily on institutional knowledge and individual author experience.
Approval Governance Was Inconsistent
Approval workflows lacked standardized routing logic. Role responsibilities were often ambiguous, increasing approval cycle time and introducing compliance risk.
Decentralized Document Storage
Microsoft Teams functioned as a de facto repository for agreements. Version control and document discoverability were inconsistent, weakening audit readiness and historical traceability.
Designing Agreements at State started with understanding how agreements actually moved through the organization — not how they were documented to work, but how they were experienced by the people doing the work every day.
I focused on breaking down how agreements were initiated, drafted, amended, and approved across multiple stakeholder groups. The goal was to uncover where delays, confusion, and rework were happening, and where the process was creating friction or uncertainty for both customers and internal teams.
A major part of this effort involved evaluating the experience differences between the legacy CRSP intake process and the emerging AI-assisted Intake@State workflow. I assessed how each approach impacted usability, clarity of requirements, and overall trust in the service. Understanding these differences helped identify where automation could improve efficiency and where human engagement still played a critical role.
Another key area of focus was service clarity and accountability. Agreements often serve as both operational and contractual guardrails, so I examined how clearly responsibilities, expectations, and service outcomes were communicated. The level of clarity directly influenced stakeholder confidence and reduced downstream misunderstandings during fulfillment and service delivery.
Research included perspectives across the full service delivery ecosystem. I worked with business customers requesting IT services, service relationship managers supporting those customers, executive approval authorities responsible for governance decisions, operational users working within governed systems, customer engagement teams coordinating requests, and agreements specialists responsible for drafting and maintaining agreement language.
Capturing insights from each of these perspectives ensured the future-state design reflected the entire lifecycle, not just one stage of the process.
I conducted semi-structured interviews that allowed participants to walk through their real workflows while discussing decision challenges, interpretation gaps, and intake pain points. These sessions typically ran 30 to 45 minutes and were designed to surface both process knowledge and behavioral patterns.
To evaluate experience differences between legacy and future workflows, I facilitated comparative testing where participants completed intake submissions and agreement review tasks across both CRSP and Intake@State environments. This allowed us to directly observe usability differences, confidence levels, and error patterns.
I also facilitated workflow simulation exercises where participants performed drafting and approval activities while thinking aloud. This helped uncover decision-making triggers, risk concerns, and areas where process expectations were unclear or inconsistent across teams.
This research created a shared understanding of where the agreements process was breaking down and established a clear foundation for designing a unified, transparent, and scalable future-state experience. It ensured that improvements were grounded in real operational needs rather than assumptions about how the service should work.
The research revealed that agreements were interpreted differently depending on stakeholder perspective. Legal reviewers focused on liability and contractual clarity, accounting reviewers focused on funding responsibility and compliance, customer engagement teams focused on service feasibility, and customers focused on expected outcomes and deliverables. Because these perspectives were not consistently reconciled during drafting, agreements often entered review cycles with conflicting interpretations. This created revision loops, approval bottlenecks, and accountability uncertainty. This finding emphasized the need for standardized templates, structured review workflows, and clearly defined responsibility boundaries across agreement stages.
The most consistent finding across interviews, workflow simulations, and comparative testing was that incomplete or unclear intake information created cascading delays throughout the agreement lifecycle. Stakeholders across drafting, legal, accounting, and fulfillment teams frequently described spending significant time clarifying service requirements rather than progressing agreements. The legacy intake process relied heavily on static forms that captured basic service requests but lacked contextual detail around scope, operational dependencies, or service outcomes. As a result, customer engagement teams often scheduled follow-up discovery meetings to reconstruct requirements that should have been captured earlier. This introduced delays, duplicated effort, and increased the likelihood of interpretation differences between stakeholders. This finding reinforced that improving agreement efficiency was not primarily a drafting or approval problem; it was an upstream context capture problem.
While stakeholders generally supported AI-assisted intake for improving efficiency, there was strong consensus that complex or novel service requests required human discovery conversations. Participants expressed concern that fully automated intake could oversimplify nuanced service requirements, potentially increasing downstream risk. Comparative experience testing showed that AI guidance improved requirement completeness for structured requests but required escalation mechanisms when ambiguity or complexity increased. This finding reinforced the importance of designing human-in-the-loop workflows where automation accelerates information gathering while preserving relationship-driven context discovery.
Drafting specialists frequently relied on personal template libraries or historical agreements rather than standardized templates. This introduced variability in agreement language, clause structure, and accountability definitions. Legal reviewers often identified inconsistencies late in the approval process, requiring revisions that delayed final sign-off. Participants noted that agreement amendments were particularly vulnerable to inconsistency because prior versions were manually compared and edited. This finding highlighted the need for centralized template governance and version-controlled drafting workflows to reduce risk and improve drafting efficiency.
Workflow simulations revealed that agreements frequently stalled during legal or accounting review stages, not because of disagreement, but because decision authority and escalation pathways were unclear. Stakeholders often sought confirmation from multiple teams before approving agreements, creating redundant review cycles and slowing overall throughput. In many cases, the absence of clearly defined approval routing forced teams to rely on institutional knowledge or informal communication to move agreements forward. This finding demonstrated the importance of automated workflow routing, role-based approval structures, and explicit decision accountability within the future-state design.
Cross-Finding Synthesis
Collectively, the research demonstrated that agreement inefficiency was not driven by individual workflow steps, but by fragmentation across intake, drafting, review, and fulfillment processes. These findings directly informed the design of a unified agreement lifecycle system that integrates AI-assisted intake, role-based dashboards, automated approval routing, template governance, and centralized agreement record management.
The research ensured that the future-state experience addressed both usability and organizational workflow complexity, creating a system that supports efficiency, compliance, and stakeholder trust simultaneously.
Designing Agreements@State required operating within a set of non-negotiable constraints across technology, policy, organization, and risk. Rather than working around these constraints, the solution was intentionally designed through them.
The application was designed to be built in Microsoft Power Apps, with ServiceNow handling request orchestration and AWS powering AI-assisted intake. Each platform introduced hard constraints that directly influenced interaction patterns and system architecture.
Power Apps
Optimized for form-driven, role-based experiences—not complex custom UI logic
Limited support for highly dynamic layouts or conditional UI orchestration at scale
Performance considerations required minimizing screen complexity and nested logic
Design implication
The experience relied on progressive disclosure, clear task sequencing, and simplified interaction models. Complexity was handled in workflow logic rather than in the UI layer.
ServiceNow
Strong at approvals, routing, and audit trails, but rigid once workflows are implemented
Approval logic changes impact reporting, compliance, and downstream systems
Over-customization introduces long-term maintenance risk
Design implication
Approval stages and decision points had to be explicit, minimal, and defensible. Ambiguous UX equals expensive technical debt in ServiceNow.
AWS AI Intake
Still in development during the project
No proven accuracy thresholds for novel or high-risk agreement types
Required normalization before data could enter ServiceNow workflows
Design implication
AI could not be positioned as an authority. It had to act as a context accelerator, with clear escalation to human review when confidence thresholds were not met.
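As a concrete illustration of that implication: AI output can be normalized into a fixed, confidence-tagged schema before anything enters ServiceNow, so downstream routing never consumes raw model output. A minimal sketch, with illustrative field names (the actual integration schema was not finalized during my engagement):

```python
from dataclasses import dataclass, field


@dataclass
class NormalizedIntake:
    """Fixed schema handed to ServiceNow; raw model output never crosses over."""
    request_id: str
    category: str                # drawn from a closed vocabulary, never free text
    summary: str
    confidence: float            # carried explicitly, not hidden
    requires_human_review: bool  # set by the gate below, not by the model
    source_fields: dict = field(default_factory=dict)  # original answers, kept for audit


def normalize(request_id: str, model_output: dict, threshold: float = 0.8) -> NormalizedIntake:
    """Gate AI output: below-threshold or unknown categories are flagged for
    human review rather than silently passed downstream."""
    category = model_output.get("category", "unknown")
    confidence = float(model_output.get("confidence", 0.0))
    return NormalizedIntake(
        request_id=request_id,
        category=category,
        summary=model_output.get("summary", ""),
        confidence=confidence,
        requires_human_review=(confidence < threshold or category == "unknown"),
        source_fields=model_output.get("answers", {}),
    )
```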
As a federal system, Agreements@State had to align with FISMA security requirements, Section 508 accessibility standards, Privacy Act and data classification policies, and auditability and decision traceability expectations. This meant no “invisible” automation, no undocumented decision-making, and no ambiguity around approval authority.
Design implication
Every interaction had to be auditable. Status changes, approvals, revisions, and escalations needed to be explicit and traceable. Transparency wasn’t just a UX improvement—it was a compliance requirement.
Stakeholder trust in automation could not be assumed. Research showed that support for AI-assisted intake was conditional: stakeholders welcomed efficiency gains for structured requests but were wary of automation handling complex or high-risk requests without human oversight.
Design implication
The experience had to earn trust gradually. AI was introduced as guidance and structure, not as a replacement for human judgment. Escalation paths were designed as first-class features, not exceptions.
Stakeholders had built informal coping mechanisms around the legacy process: email threads, personal templates, spreadsheets, and side conversations. These workarounds filled gaps but also masked systemic issues.
Design implication
The future-state system could not simply digitize existing behavior. It had to remove the need for workarounds by providing clarity, visibility, and reliability. This required rethinking workflows, not just interfaces.
Reframing the Problem: Experience and Governance Were Interdependent
The Agreements at State initiative required solving two challenges simultaneously. The first was improving user experience across a fragmented agreement lifecycle. The second was strengthening governance and compliance across multiple oversight stakeholders. Research revealed that these two priorities were not competing goals; they were deeply connected. Poor experience design introduced ambiguity, and ambiguity increased compliance risk. Similarly, governance gaps forced stakeholders to rely on manual oversight, which slowed workflows and reduced transparency. The solution required designing a system where usability reinforced governance rather than bypassing it.
The Experience and Governance Solution is not solely about improving interface usability. It involves designing systems that align human behavior, organizational policy, and technology constraints into a cohesive service model. By embedding governance directly into the user experience, Agreements at State established a scalable framework for managing complex service agreements across diverse stakeholder groups.
The experience solution focused on transforming agreements from static documents into dynamic, trackable service workflows.
Context-Rich Intake Experience: The Intake@State experience introduces AI-assisted adaptive questioning that adjusts based on service type, complexity, and risk signals. The intake process guides users through structured service discovery while allowing escalation to human engagement when requests require negotiation or clarification. This approach reduces ambiguity at the beginning of the lifecycle while preserving critical relationship-building interactions for complex agreements.
Role-Based Agreement Workspaces: The Agreements@State experience introduces role-aware dashboards that tailor agreement visibility, responsibilities, and decision actions to each stakeholder group. Customer engagement teams receive workflow coordination dashboards. Legal and accounting reviewers access structured approval and compliance review interfaces. Customers gain lifecycle tracking visibility and clear expectations around service milestones. This ensures stakeholders interact with agreements through a shared system while maintaining role-specific workflows.
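A simplified sketch of the role-based visibility model is below. The role names and action vocabulary are illustrative simplifications of the production role taxonomy:

```python
# Illustrative role-to-capability map; actual roles and actions came from
# the production design and are simplified here.
ROLE_CAPABILITIES = {
    "customer":              {"view_status", "view_milestones", "comment"},
    "customer_engagement":   {"view_status", "assign", "coordinate", "comment"},
    "legal_reviewer":        {"view_full_record", "approve", "request_revision"},
    "accounting_reviewer":   {"view_full_record", "approve", "request_revision"},
    "agreements_specialist": {"view_full_record", "draft", "amend"},
}


def allowed_actions(role: str, agreement_state: str) -> set[str]:
    """Actions a role may take; approval actions only surface during review."""
    actions = set(ROLE_CAPABILITIES.get(role, set()))
    if agreement_state != "in_review":
        actions -= {"approve", "request_revision"}
    return actions
```

Declaring capabilities in one table rather than scattering them across screens keeps role behavior auditable, which matters in this compliance context.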
Structured Drafting and Template Governance: Drafting inconsistency created both usability challenges and legal risk. Agreements@State introduces standardized templates and guided drafting workflows that help ensure consistent clause usage, accountability definitions, and service expectations. The drafting interface reduces reliance on personal template libraries while allowing flexibility for specialized agreement types.
Automated Approval Routing and Decision Accountability: Approval bottlenecks were frequently caused by unclear ownership and informal escalation pathways. The solution introduces automated workflow routing through ServiceNow integration, ensuring agreements are directed to the correct reviewers based on service type, funding model, and regulatory requirements. Each approval stage includes clearly defined decision authority, reducing redundant review cycles and ensuring accountability remains visible across the lifecycle.
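This routing logic can be expressed as an explicit rules table, which is essentially what the ServiceNow orchestration encodes. The service types, funding models, and reviewer chains below are illustrative assumptions, not the production routing matrix:

```python
# Illustrative routing table: (service type, funding model) -> ordered
# reviewer chain. Real routing criteria also included regulatory flags.
ROUTING_RULES = {
    ("cloud_services", "working_capital"): ["customer_engagement", "accounting", "legal"],
    ("licensing", "appropriated"):         ["customer_engagement", "legal"],
    ("development", "reimbursable"):       ["customer_engagement", "accounting", "legal"],
}

DEFAULT_CHAIN = ["customer_engagement", "legal", "accounting"]


def route(service_type: str, funding_model: str, regulated: bool = False) -> list[str]:
    """Return the ordered approval chain; unmatched requests get an explicit
    default chain rather than falling back to informal escalation."""
    chain = list(ROUTING_RULES.get((service_type, funding_model), DEFAULT_CHAIN))
    if regulated and "compliance" not in chain:
        chain.append("compliance")
    return chain
```

Keeping the table explicit is what makes routing defensible: changing an approval chain becomes a reviewable data change instead of buried conditional logic.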
Human-in-the-Loop AI Governance: While AI-assisted intake improves efficiency, the governance solution ensures automation remains accountable and risk-aware. The system incorporates escalation thresholds that trigger human review when requests contain ambiguous requirements, regulatory complexity, or funding uncertainty. This approach ensures AI accelerates structured tasks without replacing critical compliance and relationship-based decision-making.
Centralized Agreement Record and Decision Traceability: Previously, agreement history was fragmented across communication channels and document versions. The Agreements@State system establishes a centralized agreement record that preserves decision history, stakeholder inputs, revisions, and approval timestamps. This creates a transparent audit trail that strengthens compliance readiness and reduces institutional knowledge loss during handoffs.
Version Control and Template Standardization: Agreement amendments historically introduced inconsistencies and compliance risks. The governance solution introduces controlled version management and standardized template libraries that reduce drafting variability while preserving legal and operational alignment. This shortens revision cycles and strengthens cross-team consistency in how agreements are interpreted.
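A minimal sketch of how a centralized, append-only agreement record ties these last two pieces together, keeping versions and decisions in one auditable structure (field names and event vocabulary are illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AgreementEvent:
    """One immutable entry in the agreement's decision history."""
    actor: str
    action: str     # e.g. "submitted", "revised", "approved", "escalated"
    detail: str
    timestamp: datetime


@dataclass
class AgreementRecord:
    """Centralized record: versions and decisions live together, append-only."""
    agreement_id: str
    versions: list[str] = field(default_factory=list)   # document version URIs
    history: list[AgreementEvent] = field(default_factory=list)

    def log(self, actor: str, action: str, detail: str = "") -> None:
        # Every status change, approval, or revision is recorded; nothing is
        # edited in place, which is what makes the trail auditable.
        self.history.append(
            AgreementEvent(actor, action, detail, datetime.now(timezone.utc))
        )

    def add_version(self, document_uri: str, actor: str) -> None:
        self.versions.append(document_uri)
        self.log(actor, "revised", f"version {len(self.versions)} added")
```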
Although full production metrics were not captured during my engagement, validation testing and stakeholder reviews indicated expected improvements in:
Agreement drafting efficiency
Approval cycle predictability
Stakeholder clarity and trust in service agreements
Agreement discoverability and version control integrity
Compliance audit readiness
The Agreements@State initiative required shifting long-standing workflow habits and communication patterns. Many stakeholders had developed informal workarounds to compensate for system limitations, and those workarounds were deeply embedded in daily operations. Introducing a centralized agreement lifecycle required demonstrating reliability, clarity, and value before stakeholders would trust the new system. Successful adoption depended on aligning user experience improvements with stakeholder incentives, not simply introducing new technology.
This reinforced that enterprise UX success is measured by behavioral adoption as much as interface usability.
Working within Power Apps and ServiceNow reinforced that low-code platforms do not eliminate complexity — they constrain how complexity can be expressed. Overly ambitious interface patterns or heavily customized workflows can create long-term maintenance and performance challenges.
This required designing interaction models that were intentionally structured, modular, and scalable. The constraint forced stronger prioritization of clarity, hierarchy, and decision flow.
The lesson was that designing within platform limitations often produces more resilient enterprise experiences when approached intentionally.
AI-assisted intake introduced real efficiency opportunities, but it also surfaced important trust and governance considerations. Stakeholders supported automation for structured requests but consistently emphasized that complex or unfamiliar service requests required human discovery conversations.
The lesson was that AI adoption in regulated enterprise environments must be introduced with explicit escalation logic and transparency around system confidence. Automation that appears authoritative but lacks accountability creates more risk than value.
Designing AI as a collaborative assistant rather than a decision authority improved both stakeholder adoption and organizational comfort with emerging technology.
This project reinforced that enterprise UX isn’t primarily a design problem; it’s a coordination problem. The interface is the visible layer, but the real work lives underneath: decisions, accountability, handoffs, and risk. Agreements at State looked like an intake + document workflow on the surface. In practice, it was a multi-party system for negotiating scope, responsibility, funding, and compliance—often with incomplete information and competing incentives.
The most important shift I made early on was treating the work as service design, not screen design. When teams complained about cycle time, the instinct could have been to optimize approvals or simplify forms. Research showed the deeper issue was ambiguity entering the system and then amplifying at every handoff. That reframed the “efficiency” goal into a clarity goal: what information needs to exist, at what moment, in what format, so downstream stakeholders can make confident decisions without rework.
The AI intake discussion was also instructive. Leadership’s desire to modernize through AI was understandable, but frontline hesitation wasn’t resistance to innovation; it was resistance to risk without guardrails. The lesson wasn’t “AI is bad” or “humans are better.” The lesson was: AI becomes valuable when it accelerates context capture while staying accountable to human judgment. If you remove human engagement too early, the system may feel faster on day one but you pay for it later in escalations, rework, and trust breakdown.
Working within Power Apps + ServiceNow constraints reinforced another core truth: low-code platforms don’t eliminate complexity; they penalize hidden complexity. If the decision logic isn’t explicit, it becomes brittle. If the workflow isn’t modular, it becomes hard to evolve. That pushed me to design a future state that was realistic to implement: progressive disclosure in the UI, defensible decision points in orchestration, and a centralized record that preserved context across the lifecycle.
I also walked away with a sharper view of governance. Governance is often treated as a separate layer: policy documents, compliance reviews, gatekeeping. In this project, governance succeeded only when it was embedded into the experience: clear ownership, visible status, structured routing, version control, and traceable decisions. In other words, good UX reduced compliance risk because it reduced ambiguity.
Finally, there’s an honest reflection about delivery. The project ending before full implementation is not a failure of design, but it does highlight a reality of complex government modernization: platform dependencies, sequencing, and organizational readiness can outpace build timelines. What I’m proud of is that the work produced a coherent blueprint that aligned stakeholders and reduced risk—so the next team isn’t starting from scratch. In enterprise environments, progress often looks like making the system buildable and the organization ready.