Designing a minimal interface for generative AI
As part of my MSc in Creative Computing at UAL, I explored how generative AI models like Stable Diffusion can be made more accessible through interface design.
Instead of exposing complex parameters, I built a minimal web interface with a single prompt input and a “Generate” button. The goal was to reduce cognitive load and focus purely on the relationship between text input and visual output.
Stable Diffusion is powerful but often accessed through technical environments with many adjustable parameters. For non-technical users, this can feel overwhelming.
I wanted to explore: What happens if we strip the interface back to the essentials?
What I Built
• Text-to-image generation using Stable Diffusion
• Local model execution on my M1 MacBook
• Image generation with outputs saved to disk
• Basic regeneration flow
The interface intentionally removed raw parameter controls (steps, seed, guidance scale), allowing users to focus on prompt clarity rather than technical tuning.
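As a rough sketch, the whole setup reduces to a few dozen lines. The version below assumes the Hugging Face diffusers library, the Stable Diffusion v1.5 checkpoint, and Gradio for the single prompt box and Generate button; the fixed step count, guidance scale, and output path are illustrative defaults rather than the project's exact values:

```python
import torch
import gradio as gr
from diffusers import StableDiffusionPipeline

MODEL_ID = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
DEVICE = "mps" if torch.backends.mps.is_available() else "cpu"  # Apple Silicon GPU via Metal

pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID).to(DEVICE)

def generate(prompt: str):
    # The parameters the interface hides: fixed, sensible defaults.
    # No seed is set, so pressing Generate again re-runs the same prompt
    # with fresh noise, which gives the basic regeneration flow.
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("output.png")  # keep a copy on disk
    return image

# One text box, one button: the entire interface.
with gr.Blocks() as demo:
    prompt_box = gr.Textbox(label="Prompt")
    generate_btn = gr.Button("Generate")
    result = gr.Image(label="Result")
    generate_btn.click(fn=generate, inputs=prompt_box, outputs=result)

demo.launch()
```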
Design Thinking
By reducing the interface to one input and one action, the project emphasized:
• The power of language in shaping output
• The unpredictability of generative systems
• The tension between simplicity and control
This minimal structure made the diffusion model’s behavior more visible — users could directly observe how small changes in prompt wording affected results.
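One way to make that visible is to hold everything else constant and vary only the wording. A small sketch of that kind of comparison, assuming the same diffusers setup as above (the prompts and seed here are arbitrary examples):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps")

# Two prompts that differ by a single word; the seed is fixed so the only
# thing that changes between the two images is the wording.
for prompt in ["a quiet garden at dusk", "a quiet garden at dawn"]:
    generator = torch.Generator("cpu").manual_seed(42)
    image = pipe(prompt, generator=generator).images[0]
    image.save(prompt.replace(" ", "_") + ".png")
```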
Reflection
Building the first iteration in Python helped me understand the Stable Diffusion pipeline and how text-to-image generation works under the hood.
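For anyone curious, the unrolled pipeline looks roughly like this. It is a simplified sketch built from the diffusers component classes rather than the project's actual code, and the checkpoint, prompt, and settings are placeholders:

```python
import torch
from PIL import Image
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel, DDIMScheduler

model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
device = "mps" if torch.backends.mps.is_available() else "cpu"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")

prompt, steps, guidance_scale = "a quiet garden at dusk", 30, 7.5

# 1. The prompt (plus an empty prompt for classifier-free guidance) is
#    tokenized and turned into embeddings by the CLIP text encoder.
tokens = tokenizer(["", prompt], padding="max_length",
                   max_length=tokenizer.model_max_length,
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    text_embeddings = text_encoder(tokens.input_ids.to(device))[0]

# 2. Generation starts from pure noise in the VAE's latent space
#    (4 x 64 x 64 for a 512 x 512 image). At each timestep the UNet
#    predicts the noise to remove, steered by the text embeddings.
scheduler.set_timesteps(steps)
latents = torch.randn(1, unet.config.in_channels, 64, 64, device=device)
latents = latents * scheduler.init_noise_sigma
for t in scheduler.timesteps:
    latent_input = scheduler.scale_model_input(torch.cat([latents] * 2), t)
    with torch.no_grad():
        noise_pred = unet(latent_input, t, encoder_hidden_states=text_embeddings).sample
    noise_uncond, noise_text = noise_pred.chunk(2)
    # Guidance pushes the prediction toward the prompt-conditioned direction.
    noise_pred = noise_uncond + guidance_scale * (noise_text - noise_uncond)
    latents = scheduler.step(noise_pred, t, latents).prev_sample

# 3. The VAE decoder maps the final latent back to pixels.
with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor).sample
image = (image / 2 + 0.5).clamp(0, 1)[0].permute(1, 2, 0).cpu().numpy()
Image.fromarray((image * 255).astype("uint8")).save("under_the_hood.png")
```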
This project reinforced an important design principle for me: Powerful AI systems don’t always need more controls — sometimes they need clearer constraints.