By Ali Menard

Testing Out the Beta Version of Adobe Firefly



Is Adobe Firefly the AI program designers have been waiting for? Is it going to be a game changer? Or is it going to become the thing that replaces graphic designers and people in similar fields? First of all, what is Adobe Firefly? Firefly is a new AI program, still in beta development, that is being integrated with other Adobe programs. Adobe has already released a beta version of Photoshop with Firefly built in. Firefly offers users more efficient ways to create, design, and manipulate artwork while significantly improving their creative workflow.


For this blog, I only tested the beta version of Firefly; I did not test the new integrated Photoshop. My primary objective was to get an overall idea of what Firefly will be capable of outside of the other Adobe programs.


Generative Fill

Generative Fill was the first feature I decided to test because I use the current version of Photoshop to do similar things all the time. Generative Fill lets you select areas of a photo to add elements, remove objects from the foreground, or replace the background.


The first few attempts were not successful at all. For the very first attempt, I took a picture of my fiancé, Erik, and my intention was to make it look like someone was standing beside him in the photo. To do this, I selected a large portion of the picture to the right of him and prompted Firefly to add "a beautiful woman in a bright red dress". As you can see from the picture below, the results were laughable. Initially, I thought his arm being in the way was what caused the outcome to be so catastrophic. In an attempt to fix this problem, I selected the area to the left of Erik. While slightly better than the first attempt, the results speak for themselves.

First Attempt:


Second Attempt:


With practice, the results I intended to produce became easier and easier to achieve. It helped to be more specific when describing what I wanted and to size the selection appropriately for whatever object or animal I was going for. For example, instead of typing "bird", I would type "parrot", and instead of "dog", I typed "cute puppy", and the results were much more accurate to the vision I had in mind. As you can see from the examples below, I had a particular picture in mind for each selection, and I kept the size of the selection relative to what I wanted for that particular part of the picture.


First Selection:


Second Example:


Final Result:




It's not a perfect example, but it shows how easily this tool can manipulate pictures. As a person who has spent countless hours attempting to create something like this in Photoshop, I can definitively say that Generative Fill is going to reform how much time is spent transforming photos. Rather than spending hours looking for the right pictures to use, getting the selection just right, adjusting the lighting, and so on, you will be able to move through the process much faster. Some of those steps will remain, like adjusting the lighting to match the rest of the photo and getting the selection just right. However, in the version of Photoshop that has been integrated with Firefly, the process will be much simpler.


Text to Image

Have you ever gone to a stock photography website in an attempt to find a picture for your project, only to be disappointed with the results even after spending hours trying various keyword combinations? Text-to-image AI might actually be the solution to that problem. The key word being MIGHT.


The Text to Image feature was by far the most straightforward for learning to achieve the artistic composition I was looking for. It lets you pick and choose various elements like Content Type, Style, Techniques, Color and Tone, Lighting, and Composition. Using the "graphic" and "neon" filters, I entered the words "futuristic engineer in lab" to generate the following pictures. Shockingly, on the very first try, the pictures Firefly generated were exactly what I had envisioned in my head.


The small victory was short-lived, because as I continued to provide Firefly with new prompts, the outcomes were not instantly successful. But as with every new program, there is a learning curve. Including details about a specific style helps, and the more descriptive you can be, the better the outcome. Even when I was as detailed as possible, there were still downfalls. Requesting images of realistic humans doing mundane, everyday things produced not-so-great results. At first glance, the pictures looked great, but when you started looking at the details around the faces or hands, you could see a lot of distortion. Below are more examples of the Text to Image tool.



Text Effects

Text Effects is going to revolutionize the way graphic designers quickly generate custom, creative typography. What once could take hours to create can now be created in a matter of minutes. As with any new program, every designer will experience a learning curve with this technology. The Text Effects tool allows users to change the font, how far the imagery flows outside of the text, the background color, and the shape of the letters. While I was blown away by how easy the Text Effects tool was to use, there are some elements that still need to be worked out. For example, just like with the Text to Image tool, the more specific the description, the better the results.




Just like the Text to Image tool and the Generative Fill tool, it produced what I was looking for, but not in the way I was hoping. If you look closely, some of the animals are distorted. After multiple attempts and numerous word combinations, I discovered a few "rules" to follow when entering prompts: when listing multiple items such as animals, reference specific animals; if you do list several animals or items, don't go overboard with the number you input; be mindful of the space, shape, and size inside the letters you are working with; and, for optimal results, use subjects that can be abstract or are naturally abstract. In the example below, I prompted Firefly with "abstract colorful painting drips". The result was exactly what I had in mind, and I loved its playful nature.


Even in the examples I didn't show, as I mentioned before, I could tell the AI performed best when the thing I was describing was either abstract, something without a face, or something unrealistic like a "furry friendly monster". Something to keep in mind is that this is just the beta version of the app. Based on my research, there will supposedly be a way to import reference photos to make the pictures more realistic and less distorted. After watching videos of people using the integrated Adobe Photoshop, I can tell this is just a small preview of what the app is capable of. More on that to come.


As AI programs became more and more popular, I was very hesitant to accept them, not only because they were new and unfamiliar but because, like most creative workers, I was worried they were going to eliminate the need for people like graphic designers. Now, I realize that graphic designers and other creative types need to embrace AI and learn to use it to enhance their creative skills.


