Testing Out the Beta Version of the New Firefly Integrated Adobe Photoshop - Part 2
In part 1 of this blog (read here), I talked about testing out Adobe Photoshop's newly integrated AI software. I explored swapping out backgrounds for new ones and expanding the background of a photo. In this blog, I explore creating something out of nothing using the new AI tools.
Additive Image Generation
With my constant frustration trying to find the perfect stock photo for every project, I wanted to test how well the AI could add objects or products to pictures based on a description. I started by choosing a background I knew I could add objects to: a simple wooden table. I tried to think of photos people might use in their work. After making a selection on the table, I prompted the program to insert "a variety of beauty and skincare products in various bottles in various sizes in the beige color spectrum." As you can see from the results, it wasn't a bad start. With a few quick touch-ups, this image would be perfect for mocking up a logo on different kinds of packaging to determine which bottle would work best for a beauty brand.
Then again, what good is an experiment if you can't reproduce the results? As you can see from the picture, many of the bottles are covered where the label would go, which isn't the greatest option when attempting to mock up a logo or package design. Using the same background, I prompted the program to "insert a few lotion bottles in a couple of different sizes in the beige color palette." This time, the resulting picture would be much easier to manipulate for testing out logos or package designs.
Text to Image Generation
For the last test, I wanted to see how well the AI software could generate a picture from a single description. As I said before, it can be highly frustrating to spend hours searching for the perfect stock photo for every project, and if using AI can speed that process along, I am all for it. Because I am not the best at describing entire scenes, I looked back through the videos I had watched earlier to read some of the descriptions others were using.
Starting with a blank canvas, I selected the entire area and prompted the program to insert "Realistic modern office with multiple dark wooden cubicles with an open floor plan, tall beige ceilings, light brown wooden floor, blue walls with large window and plenty of overhead lighting." Unfortunately, getting the result I was looking for took a couple of tries at rewording the description. Upon closer inspection, there are a couple of little things that do look wonky, but in a pinch one could fix those and use this picture for a project.
With each little experiment, I learned more tips and tricks for avoiding not-so-great outcomes. I cannot stress this enough: the more descriptive you are in the prompt box, the better your results. Instead of saying "toy truck," say "red toy fire truck with big black wheels and lights on top." When expanding the background of an image, reference things that are already in the picture in your description. Just like in the online version, keeping your selection limited to the object or background you are describing will give you optimal results. For example, if you are attempting to add clouds to your background, select only the sky. Pro tip: if you aren't satisfied with what the program generates, simply click the "Generate" button again and it will produce more results.
After exploring the integrated Photoshop with the new AI features, I am excited to spend even more time discovering all the program has to offer, especially once it is out of the beta testing stage. There were also many things I saw during my research that I would love to try, such as changing someone's outfit or creating beautiful abstract pictures by describing a scene from a storybook. How cool would that be? More on that in my next blog.