Scribbling Interfaces

Scribbling Interfaces is an experimental, AI-powered design tool that translates scrappy wireframe sketches into components from an existing Design System.

It does so by inferring sketched element types using machine learning and translating them into previously linked design components.

This allows for quick iterations in low fidelity while evaluating surfaces in high fidelity.

Video of a working prototype. Button, Text and Image placeholders are linked to Figma components and translated into higher fidelities.


Approach

The plugin uses a TensorFlow object-recognition model to detect drawn elements and assign each of them to a component.
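To give a rough idea of that linking, here is a minimal TypeScript sketch of how detected classes could map to the Figma components they were linked to. The component keys below are placeholders, not the actual linked components.

```ts
// Hypothetical mapping from the classes the model can detect to the
// Figma component keys they were linked to. The keys are placeholders.
const COMPONENT_KEYS: Record<string, string> = {
  button: "<figma-key-of-linked-button-component>",
  text: "<figma-key-of-linked-text-component>",
  image: "<figma-key-of-linked-image-placeholder>",
};
```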

The model was custom-trained on a set of 500 hand-drawn wireframes containing variations of image, button and text frames. [1]

Image showing many different wireframes next to each other.

The wireframes were drawn both digitally and on paper.

Putting things together

After training the model for several generations [2] and running a few predictions on a local machine, I wrapped the model in a small web app so friends and colleagues could interact with it. This helped me see whether the predictions were accurate (enough) and whether the whole idea sparked some excitement.

In the web app, the user would draw the interface on a napkin. A napkin seemed like a good analogy for capturing thoughts and rough ideas that would later be turned into polished surfaces.

Object inference

When running inference with a YOLO object-recognition model, the model is 1) fed a source image, 2) does some inference magic and 3) eventually returns the image along with the bounding box of every predicted element.

Image of a schematic that shows how the machine learning model sees the sketched image.

A high-level overview of the original input, the segmented output with its bounding boxes and the synthesized mockups.
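In code terms, each prediction is essentially a label plus a bounding box. The sketch below shows the shape assumed in the rest of this write-up; the Detection type and the detect() helper are illustrative, not the project's actual code.

```ts
// Shape of a single prediction, as assumed throughout this write-up.
interface Detection {
  label: "button" | "text" | "image"; // predicted element type
  score: number;                      // model confidence, 0–1
  x: number;                          // top-left corner, in image pixels
  y: number;
  width: number;
  height: number;
}

// Hypothetical wrapper around the model: source image in, bounding boxes out.
declare function detect(image: HTMLCanvasElement): Promise<Detection[]>;
```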

These bounding boxes provide us with the width, height and x/y position of every detected element, or in other words, a blueprint we can use to re-draw the image. All we have to do is substitute each bounding box with its respective detected element et voilà, we have magically increased the fidelity of our wireframe.

Visualizing the bounding boxes predicted by the ML model.
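Inside a Figma plugin, this substitution can be sketched as creating an instance of the linked component for every bounding box, using the plugin API's importComponentByKeyAsync and createInstance. The snippet below is a simplified sketch that assumes the Detection shape and COMPONENT_KEYS mapping from above, not the plugin's actual implementation.

```ts
// Minimal sketch: swap each predicted bounding box for an instance of the
// linked Figma component. Assumes `Detection` and `COMPONENT_KEYS` from above.
async function redraw(detections: Detection[]): Promise<void> {
  for (const d of detections) {
    const key = COMPONENT_KEYS[d.label];
    if (!key) continue;

    // Both calls are part of the Figma plugin API.
    const component = await figma.importComponentByKeyAsync(key);
    const instance = component.createInstance();

    // Naive 1:1 translation of the bounding box, no alignment or layout yet.
    instance.x = d.x;
    instance.y = d.y;
    instance.resize(d.width, d.height);

    // Make sure the instance ends up on the current page.
    figma.currentPage.appendChild(instance);
  }
}
```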

In the initial napkin version, the translation of scribbles to wireframes was very naïve because it translated the bounding boxes 1:1, without paying attention to, for example, alignment or layout. While this can be improved by aligning items in near proximity (as seen in the Figma plugin, and sketched below), there are certainly more sophisticated solutions that could, for example, utilize Figma's built-in layout features. Certainly something to work on for a next version.
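As a starting point, that proximity alignment could look something like the following sketch, which snaps left edges sitting within an (arbitrary) 16-pixel threshold of each other to a shared x position. This is an illustration of the idea, not the plugin's actual code.

```ts
// Naive proximity alignment: snap left edges that are within `threshold`
// pixels of each other to their average x position.
function alignLeftEdges(detections: Detection[], threshold = 16): Detection[] {
  const sorted = [...detections].sort((a, b) => a.x - b.x);
  const groups: Detection[][] = [];

  // Group elements whose left edges are close to each other.
  for (const d of sorted) {
    const current = groups[groups.length - 1];
    if (current && d.x - current[current.length - 1].x <= threshold) {
      current.push(d);
    } else {
      groups.push([d]);
    }
  }

  // Snap every group member to the group's average left edge.
  for (const group of groups) {
    const x = group.reduce((sum, d) => sum + d.x, 0) / group.length;
    for (const d of group) d.x = x;
  }
  return detections;
}
```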

Context

This project was part and deliverable of Umeå Institute of Design's Experience Prototyping course which was led and mentored by the excellent Jen Skyes and Andreas Refsgaard.

Footnotes

  1. For the sake of keeping this page concise, I'm not diving into technical details or rationales. YOLOv4 was chosen due to its relative ease of use and because similar work used it and vouched for it.

  2. A generation (or epoch) refers to one complete pass of the training data through a machine learning algorithm. Put simply, each additional generation gives the algorithm more time to 'learn' from the data, which usually improves its precision.