AI & Photography Workshop: Exploring Reve Image and Stable Diffusion with Xiaobo Fu

Date: 10:00 - 17:00, 10th August 2025

Further Information

Price: £99 (Early Bird, until 1st August) / £120 thereafter
Workshop Size: 10 People
Location: Studio I, Too Young Too Simple, 80a Ashfield Street, London, E1 2BJ
Important Notes:

Please bring your own laptop to the class.

To use Reve Image and Stable Diffusion, you will need to spend approximately £20 on credits or cloud computing resources.
Interested in creating striking images and exploring how AI can enhance your photography, whether for commercial work or personal projects? Join this workshop to discover the full potential of AI in photography.

Workshop Content

1. General Introduction to Generative AI (1 Hour)

We will begin with a plain-language explanation of how generative AI tools such as Midjourney, DALL·E, and Stable Diffusion work. You’ll learn the basic mechanisms behind neural networks and how they can create images from text. We’ll especially focus on the "magic" of Stable Diffusion, which turns random noise into meaningful images.

This section won't get too technical—the goal is to help you understand the core concepts behind generative AI. This foundation is important for writing better prompts and gaining more creative control over the results.
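The noise-to-image "magic" mentioned above can be sketched in a few lines. This toy NumPy example is our own illustration, not Stable Diffusion's actual code: it shows the forward "noising" process that diffusion models learn to reverse. An image is blended with progressively more Gaussian noise until almost nothing of it remains; generation then runs that process backwards, starting from pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.linspace(0.0, 1.0, 64)      # stand-in for a real image
T = 1000
betas = np.linspace(1e-4, 0.02, T)     # noise schedule (illustrative values)
alpha_bar = np.cumprod(1.0 - betas)    # cumulative fraction of signal kept

def noised(x0, t):
    """Sample x_t: a blend of the original image and Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Early steps keep most of the image; by the final step almost no
# original signal survives, which is why generation can start from
# pure noise and work backwards.
early, late = noised(image, 10), noised(image, T - 1)
```

A trained diffusion model's job is the hard part left out here: predicting, at each step, the noise that was added, so it can be subtracted out again.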

2. How to Write Better Prompts (2 Hours)

Next, we’ll explore what a "prompt" is and why it's so important in working with generative AI. You'll learn practical tips for crafting clear, detailed prompts that lead to better outputs.

We'll use Reve Image 1.0, a trending tool developed specifically for creatives: designers, artists, photographers, and filmmakers. It is capable of generating notably realistic and aesthetically pleasing images, and it will serve as our playground for experimenting with prompt writing.
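One habit this section builds is structuring a prompt rather than typing a one-word idea: spell out subject, setting, lighting, camera details, and style. The `build_prompt` helper below is our own illustration, not part of Reve Image or any real API; it simply shows the difference in specificity.

```python
def build_prompt(subject, setting, lighting, camera, style):
    """Join the building blocks of a detailed image prompt."""
    return ", ".join([subject, setting, lighting, camera, style])

# A vague prompt leaves everything to the model:
vague = "a portrait"

# A detailed prompt pins down the photographic decisions:
detailed = build_prompt(
    subject="portrait of an elderly fisherman",
    setting="on a misty harbour at dawn",
    lighting="soft golden-hour side light",
    camera="85mm lens, shallow depth of field",
    style="natural film grain, muted colours",
)
```

In the workshop we'll compare outputs from both kinds of prompt and refine them iteratively.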

3. Gaining More Control Over the Image (3 Hours)

Text prompts alone offer limited control over AI outputs. In this section, we’ll move on to a more advanced tool: Stable Diffusion. While it's extremely powerful, it can also be complex and intimidating at first.

We’ll start with setting up Stable Diffusion in the cloud and introduce its WebUI interface, breaking down what might initially seem overwhelming.

Once you're comfortable with the basics, we'll dive into ControlNet, an extension of Stable Diffusion that gives you precise control over the generation process. You'll learn how to use sketches, poses, depth maps, and more to guide the AI in creating exactly the kind of image you envision.
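To make the ControlNet idea concrete: alongside the text prompt, you feed the model a conditioning image, for example an edge map extracted from a reference photo, and the generated image follows that structure. Real workflows use a proper Canny edge detector (e.g. via OpenCV); this minimal NumPy gradient is only a stand-in to show what such a conditioning input looks like.

```python
import numpy as np

def edge_map(img, threshold=0.1):
    """Crude edge detector: gradient magnitude, thresholded to 0/1."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(np.uint8)

# A toy "photo": a bright square on a dark background.
photo = np.zeros((32, 32))
photo[8:24, 8:24] = 1.0

# The edge map is non-zero only along the square's outline. In a real
# workflow this image, together with a text prompt, is passed to a
# ControlNet pipeline so the generated image keeps the same composition.
edges = edge_map(photo)
```

Pose skeletons and depth maps play the same role: they constrain where things go, while the prompt decides what they look like.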

About Xiaobo Fu

Xiaobo Fu is a photographer, visual artist, and PhD student in Computer Arts at Goldsmiths, University of London. He also works as a studio manager at Our Studio.

Xiaobo creates experimental photography and digital art. His work explores perspective, identity and the interaction of people with technology. Some of his main projects include Aura Aura, which turns portraits into wave-like images, and Chinese Calligraphy Style Transfer, which uses AI to change and recreate writing styles.

His work has been shown at Ambika P3 Gallery, the Institute of International Visual Arts, and the China Design Centre in London. In 2021, he won the Google AMI Research Award for a group project that used movement and AI to create calligraphy and graffiti.

Xiaobo's Website: xiaobofu.co.uk