I Got Beta Access to Adobe Firefly!

Today I finally got the email I have been waiting for! Beta Access to Adobe Firefly!


I am thrilled to announce that I have been granted beta access to Adobe Firefly, a new family of creative generative AI tools. As both a creative professional and a tech enthusiast, I am always looking for new and innovative tools to help me bring my ideas to life, and Firefly promises to do just that.

What is Adobe Firefly?

Firefly is a family of creative generative AI models (or tools) that is currently being integrated into Adobe products. Initially, it focuses on generating images and text effects. It offers new ways to ideate, create, and communicate while significantly improving creative workflows.

Personally, I have been a fan of the Adobe Suite for years, and it has been my go-to creative suite since I started my journey in web development back in 2014. So, I am super excited to see AI being integrated into Adobe's current apps. My expectation is that these tools will help me bring my own ideas to life, with less time spent on learning how to use the software.

Currently, the beta offers three options:

  • Text-to-image generation.
  • Text effects, which apply styles or textures to text prompts.
  • Vector recoloring, which creates variations of existing artwork.

What is generative AI?

Generative AI is a type of artificial intelligence that has the ability to generate new data, images, sounds, and other outputs based on the patterns it has learned from existing data. It can take ordinary inputs, like text prompts or simple sketches, and produce extraordinary results, such as videos, documents, digital experiences, rich images, and art. Generative AI models are trained on large datasets to learn patterns and relationships in the data, which they can then use to generate new and unique outputs. Generative AI has many potential applications in fields like art, design, music, gaming, and more.
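For the technically curious, here is what that prompt-to-image workflow looks like in code. This is only a rough, minimal sketch using the open-source Stable Diffusion model via Hugging Face's diffusers library as a stand-in; it is not Firefly's own API, which during the beta is only available through Adobe's web interface.

    # Minimal text-to-image sketch using Stable Diffusion via Hugging Face's
    # diffusers library. This is an illustrative stand-in for the general
    # technique, NOT Adobe Firefly's API (Firefly's beta is web-only).
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a pretrained text-to-image model (weights download on first run).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # run on a GPU if one is available

    # A plain-language prompt is the only input the model needs.
    prompt = "A Labrador wearing a cape looking over the sunset"
    image = pipe(prompt).images[0]

    image.save("labrador_with_cape.png")

Tools like Firefly wrap this same idea behind a friendly interface: you type a prompt, the model samples an image that matches it.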

How has the Beta Access to Adobe Firefly been so far?

With Firefly, I can generate stunning images with simple inputs. The AI models are trained on massive datasets, allowing them to learn patterns and relationships in the data and generate new and unique outputs based on that knowledge.

My first instinct, as a Labrador owner, was to see how well the beta could create a Labrador wearing a cape. My text input was:

“A Labrador wearing a cape looking over the sunset”.

This was the result.

A Labrador wearing a cape looking over the sunset

Next, I wanted to create a video game character. Unfortunately, the beta didn’t allow me to write “Battle Royale”, as I wanted to create a character close to either Fortnite or PUBG. Therefore, the prompt I went with was as simple as:

“A character from a video game”.

This was the result.

A character from a video game

Initial thoughts

So far, the results have been great, although far from perfect. When I tried generating “A cat with the mouth of a human and the eyes of a dog”, it just gave me four images of an ordinary cat with little to no variation. Next up, I will give Midjourney and Adobe Firefly the same series of text prompts to see the creative differences between the two tools.

Would you also like beta access to Adobe Firefly? You can sign up on their website today!