Hello everyone.
I’m the Co-founder and CTO of Krea. We’re excited: we’ve wanted to release the weights for our model and share them with the HN community for a long time.
My team and I will try to be online throughout the day to answer any questions you may have.
Any plans to work with the Flux 'Kontext' version, the editing models? I think the use cases for such prompted image editing are just wildly huge. Their demo blew my mind, although I haven't seen the quality of the open-weight version yet. It is also a 12B distill.
Hi. Thanks for this. What is your goal in doing this, from a business standpoint? Or is it purely altruistic?
Haha, classic!
It’s simple: hackability and recruiting!
The open-source community hacking on it and playing with it, plus talented engineers who may be interested in working with us, already makes this release worth it. A single talented distributed-systems engineer has a lot of impact here.
Also, the company ethos is around AI hackability/controllability, a high bar for talent, and AI for creatives - so this aligns perfectly.
The fact that Krea serves both in-house and third-party models tells you that we are not that bullish on models being a moat.
I can say that it's definitely working on me! I hadn't heard of Krea before, and this is a great introduction to your work. Thanks for sharing it.
People underestimate how much goodwill companies gain from pushing open-source stuff out, not just from word of mouth but also from picking up users for their commercial offerings. While I could run the open-source releases myself, and appreciate them, in a lot of cases using the APIs from the companies I like (mostly the ones that do open-source stuff) tends to be easier for bigger jobs...
(unless the code repository and its history are embarrassingly bad, which describes most repositories)
I need a model for languages other than English.
Regarding the P(.|photo) vs P(.|minimal) example, how do you actually resolve this conflict? It seems to me that photorealism should be a strong default "bias".
My reasoning: If the user types in "a cat reading a book" then it seems obvious that the result should look like a real cat which is actually reading a book. So it obviously shouldn't have an "AI style", but it also shouldn't produce something that looks like an illustration or painting or otherwise unrealistic. Without further context, a "cat" is a photorealistic cat, not an illustration or painting or cartoon of a cat.
In short, it seems that users who want something other than realism should be expected to mention it in the prompt. Or am I missing some other nuances here?
Nice release. Ran some preliminary tests using the 12B txt2img Krea model. Its biggest wins seem to be raw speed (and possibly realism), but perhaps unsurprisingly it did not score any higher on the leaderboard for prompt adherence than the normal Flux.1 Dev model.
https://genai-showdown.specr.net
On another note, there seems to be some indication that Wan 2.2 and future models might end up becoming significant players in the T2I space, though you'll probably need a metric ton of LoRAs to cover the lack of image diversity.
Can you point to a URL with the tests you’ve done?
Also, FWIW, this model's focus was on aesthetics rather than strict prompt adherence. Not to excuse the bad samples, but to emphasize one of the research goals.
It’s a thorny trade-off, but an important one if one wants to get rid of what’s sometimes known as “the flux look”.
Re: Wan 2.2, I’ve also seen people commenting about using Wan 2.2 for the base generation and Krea for a refiner pass, which I thought was interesting.
The Image Showdown site actually does have Flux Krea images but they're hidden by default. If you open up the "Customize Models" dialog you can compare them against other Flux models (Flux.1 Dev and Kontext).
> FWIW, this model focus was around aesthetics
Agreed - whereas these tests are really focused on various GenAI image models' ability to follow complicated prompts and are not as concerned with overall visual fidelity.
Regarding the "flux look" I'd be interested to see if Krea addresses both the waxy skin look AND the omnipresent shallow depth of field.
Hi! I'm the lead researcher on Krea-1. FLUX.1 Krea is a 12B rectified flow model distilled from Krea-1, designed to be compatible with the FLUX architecture. Happy to answer any technical questions :)
I come from a traditional media production background, where media is produced in separate layers that are then composited together to create a final deliverable still image, motion clip, and/or audio clip. This type of media production, through the creation of elements that are then combined, is an essential aspect of expense management and quality control. Current AI image, video, and audio generation methods do not support any of that. ForgeUI did briefly, but that went away, which I suspect is because few understand large-scale media production requirements.
I guess my point is: do you have any (real) experienced media production people working with you? People with experience in actual feature-film VFX, animated commercials, and multi-million-dollar-budget productions?
If you really want to make your efforts a wild success, simply support traditional media production. None of the other AI image/video/audio providers seem to understand this, and it is gargantuan: if your tools plugged into traditional media production, they would be adopted immediately. Currently, they are only tentatively adopted, if at all, because they do not integrate with production tools or expectations at all.
The model looks incredible!
Regarding this part:
> Since flux-dev-raw is a guidance distilled model, we devise a custom loss to finetune the model directly on a classifier-free guided distribution.
Could you go into more detail on the specific loss used for this, and any other tips for finetuning it that you might have? I remember the general open-source AI art community had a hard time finetuning the original distilled flux-dev, so I'm very curious about that.
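For context, my mental model of plain guidance distillation is something like this PyTorch sketch (all names hypothetical; presumably not your actual loss), so I'm curious how finetuning "directly on a classifier-free guided distribution" differs from or extends it:

import torch
import torch.nn.functional as F

def cfg_velocity(teacher, x_t, t, text_emb, null_emb, guidance_scale):
    # Standard classifier-free guidance: combine conditional and
    # unconditional velocity predictions from an undistilled teacher.
    v_cond = teacher(x_t, t, text_emb)
    v_uncond = teacher(x_t, t, null_emb)
    return v_uncond + guidance_scale * (v_cond - v_uncond)

def guidance_distill_loss(student, teacher, x_t, t, text_emb, null_emb, guidance_scale):
    # A guidance-distilled student takes the scale as an input and is trained
    # to reproduce the guided velocity in a single forward pass.
    with torch.no_grad():
        target = cfg_velocity(teacher, x_t, t, text_emb, null_emb, guidance_scale)
    pred = student(x_t, t, text_emb, guidance_scale)
    return F.mse_loss(pred, target)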
thanks for doing this!
what does "designed to be compatible with FLUX architecture" mean and why is that important?
FLUX.1 is one of the most popular open-weights text-to-image models. We distilled Krea-1 into the FLUX.1 [dev] model so that the community can adopt it seamlessly into the existing ecosystem. Any finetuning code, workflows, etc. that were built on top of FLUX.1 [dev] can be reused with our model :)
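For example, loading it with diffusers is the same code path as FLUX.1 [dev]; a minimal sketch (the sampler settings below are just plausible defaults, check the model card, and note the repo is gated so you'll need to accept the license and log in with `huggingface-cli login` first):

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev",   # same loading code as FLUX.1 [dev]
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(
    prompt="a photo of an owl perched on a mossy branch at golden hour",
    height=1024,
    width=1024,
    guidance_scale=4.5,        # plausible value; tune per the model card
    num_inference_steps=28,    # likewise
).images[0]
image.save("owl.png")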
do LoRAs conflict with your distillation?
The architecture is the same, so we found that some LoRAs work out of the box, but some don't. In those cases, I would expect people to re-run their LoRA finetuning with the trainer they've used.
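Concretely, trying an existing FLUX.1 [dev] LoRA is the usual diffusers call (the repo id below is hypothetical); if the output degrades, re-train the LoRA against this checkpoint with the same trainer:

# Assuming `pipe` is the FluxPipeline loaded in the snippet above.
pipe.load_lora_weights("your-user/your-flux-dev-lora")  # hypothetical LoRA repo id or local .safetensors path

image = pipe(
    prompt="product shot of a ceramic mug on a wooden table, soft window light",
    guidance_scale=4.5,
    num_inference_steps=28,
).images[0]
image.save("mug_lora.png")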
Can someone ELI5 why the safetensors file is 23.8 GB, given the 12B-parameter model? Does the model use closer to 24 GB of VRAM or 12 GB of VRAM? I've always assumed 1 billion parameters = 1 GB of VRAM. Is this estimate inaccurate?
Quick napkin math assuming bfloat16 format: 1B parameters * 16 bits = 16B bits = 2 GB. Since it's a 12B-parameter model, you get around ~24 GB. Downcasting from float32 to bfloat16 comes with pretty minimal performance degradation, so we uploaded the weights in bfloat16 format.
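The same arithmetic in a few lines of Python:

params = 12e9           # ~12B parameters
bytes_per_param = 2     # bfloat16 = 16 bits = 2 bytes
print(params * bytes_per_param / 1e9, "GB")  # 24.0 GB, matching the ~23.8 GB safetensors file
# Peak VRAM at inference is roughly the weights plus activations,
# so without offloading expect usage closer to 24 GB than 12 GB.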
A parameter can be any size float. Lots of downloadable models are FP8 (8 bits per parameter), but it appears this model is FP16 (16 bits per parameter)
Often, the training is done in FP16 then quantized down to FP8 or FP4 for distribution.
I think they are bfloat16, not FP16, but they are both 16bpw formats, so it doesn't make a size difference.
Wiki article on bfloat16 for reference, since it was new to me: https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
Pardon the ignorance, but it's the first time I've heard of bfloat16.
I asked ChatGPT for an explanation and it said bfloat has a higher range (like FP32) but less precision.
What does that mean for image generation, and why was bfloat chosen over FP16?
My fuzzy understanding, and I'm not at all an expert on this, is that the main benefit is that bf16 is less prone to overflow/underflow during calculation, which is a bigger source of problems in both training and inference than the simple loss of precision. So once it became widely supported, it became a commonly preferred format for models (whether image gen or otherwise) over FP16.
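You can see the trade-off directly in PyTorch: bf16 keeps roughly fp32's exponent range, but has coarser precision than fp16:

import torch

for dtype in (torch.float16, torch.bfloat16, torch.float32):
    info = torch.finfo(dtype)
    # .max shows dynamic range, .eps shows relative precision near 1.0
    print(f"{str(dtype):16} max={info.max:.3e}  eps={info.eps:.3e}")
# float16   max≈6.550e+04  eps≈9.766e-04   (small range, finer precision)
# bfloat16  max≈3.390e+38  eps≈7.812e-03   (fp32-like range, coarser precision)
# float32   max≈3.403e+38  eps≈1.192e-07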
That's a good ballpark for something quantized to 8 bits per parameter. But you can 2x/4x that for 16 and 32 bit.
I've never seen a 32 bit model. There's bound to be a few of them, but it's hardly a normal precision.
Some of the most famous models were distributed as F32, e.g. GPT-2. As things have shifted more towards mass consumption of model weights it's become less and less common to see.
> As things have shifted more towards mass consumption of model weights it's become less and less common to see.
Not the real reason. The real reason is that training has moved to FP/BF16 over the years as NVIDIA made that more efficient in their hardware; it's the same reason you're starting to see some models released in 8-bit formats (DeepSeek).
Of course, people can always quantize the weights to smaller sizes, but the master versions of the weights are usually 16-bit.
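For anyone curious what "quantize down" means mechanically, the simplest symmetric int8 round-trip looks roughly like this (a toy sketch, not what production quantizers actually ship):

import torch

w = torch.randn(4096)                                # a "master" fp32 weight row
scale = w.abs().max() / 127                          # one scale per tensor (symmetric int8)
w_int8 = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
w_deq = w_int8.to(torch.float32) * scale             # dequantized copy used at inference
print("max abs error:", (w - w_deq).abs().max().item())  # small, but nonzero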
And on the topic of image generation models, I think all the Stable Diffusion 1.x models were distributed in f32.
Tried a simple prompt, and got some pretty interesting results:
"Octopus DJ spinning the turntables at a rave."
The human like hands the DJ sprouts are interesting, and no amount of prompting seems to stop them.
Opinionated, as the paper says.
Describing it as "Octopus DJ with no fingers" got rid of the hands for me, but interestingly, also removed every anthropomorphized element of the octopus, so that it was literally just an octopus spinning turntables.
I still get octopus hands, even with just "Octopus DJ with no fingers" and nothing else.
Maybe you got a lucky roll :)
I've never gotten one to make what I am thinking of: a Galton board. At the top, several inches apart, are two holes from which balls drop. One drops blue balls, the other red balls. They form a merged distribution below in columns, demonstrating dual overlapping normal distributions.
Imagine one of these: https://imgur.com/a/DiAOTzJ but with two spouts at the top dropping different colored balls
Its attempts: https://imgur.com/a/uecXDzI
Have you tried building one irl? I can't find a video of a double one
I have not. It definitely is not something in training sets :)
hey hn! I'm one of the founders at Krea.
we prepared a blogpost about how we trained FLUX Krea if you're interested in learning more: https://www.krea.ai/blog/flux-krea-open-source-release
Off topic but did you really hide scroll bars on the website? Why...?
They probably did it because the website might look better without a scrollbar, but they should realize that many browsers hide the scrollbar anyway and only display it when you hover over it or start scrolling. That said, the scrollbar is always there for me (unless hidden by CSS), and I would not have minded it at all.
UI brought to you by vibe code
nah, Krea is just from that side of design twitter where you don't uppercase letters and you can break the rules sometimes. very atypography-coded.
Do you have an NVIDIA optimized version? Similar to how RTX accelerated FLUX.1 Kontext: https://blogs.nvidia.com/blog/rtx-ai-garage-flux-kontext-nim...
We have not added a separate RTX-accelerated version for FLUX.1 Krea, but the model is fully compatible with the existing FLUX.1 dev codebase. I don't think we made a separate ONNX export for it, though. Doing a 4-8 bit quantized version with SVDQuant would be a nice follow-up so that the checkpoint is friendlier for consumer-grade hardware.
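In the meantime, running in bf16 with diffusers' CPU offloading is a reasonable stopgap on consumer cards; a sketch (offloading trades generation speed for lower peak VRAM):

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
)
# Keeps only the submodule currently running on the GPU;
# enable_sequential_cpu_offload() goes further at a bigger speed cost.
pipe.enable_model_cpu_offload()

image = pipe("a misty pine forest at dawn, 35mm photo").images[0]
image.save("forest.png")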
Relevant links:
- GitHub repository: https://github.com/krea-ai/flux-krea
- Model Technical Report: https://www.krea.ai/blog/flux-krea-open-source-release
- Huggingface model card: https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev
I'd recommend you offer a clearly documented pathway for companies to license commercial usage rights for the outputs if they get the results they seek (I'll know soon enough!)
You can find details about the license here: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/mai...
In a nutshell, it follows the same license as BFL Flux-dev model.
uv not working, no clicking, no torch, and:
Cannot access gated repo for url https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev/res.... Access to model black-forest-labs/FLUX.1-Krea-dev is restricted. You must have access to it and be authenticated to access it. Please log in.
I usually use https://github.com/axolotl-ai-cloud/axolotl on Lambda/Together for working with these types of models. Curious what others are using? What is the quickest way to get started? They mention pre-training and post-training but sadly didn't provide any reference starter scripts.
We actually have a GitHub repository to help with inference code.
Check this out: https://github.com/krea-ai/flux-krea
Let me see if we can add more details on the blog post and thanks for the flag!
Thanks! Yes, the inference is pretty straightforward, but the real opportunity IMHO is custom pre-training and post-training, given the open weights.
Amazing. I can practically smell that owl it looks so darned owl-like.
From the article it doesn’t seem as though photorealism per se was a goal in training; was that just emergent from human preferences, or did it take some specific dataset construction mojo?
I love owls. Photorealism was one of the focus areas for training because the "AI look" (e.g. plastic skin) was the biggest complaint about the FLUX.1 model series. Photorealism was achieved with careful curation of both the finetuning and preference datasets.
Thanks for the release to the team! Any plans on doing the same for FLUX.1 Kontext?
Cool to see an open weight model for this. But what's the business use case? Is it for people who want to put fake faces on their website that don't look AI generated?
Thanks!
From a business point of view, there are many use-cases. Here's a list in no particular order:
- You can quickly generate assets that can be used _alongside_ more traditional tools such as Adobe Photoshop, After Effects, or Maya/Blender/3ds Max. I've seen people creating diffuse maps for 3D using a mix of diffusion models and manual tweaking with Photoshop.
- Because this model is compatible with the FLUX architecture, we've also seen people personalizing the model to keep products or characters consistent across shots. This is useful in the e-commerce and fashion industries. We allow easy training on our website (we labeled it Krea 1) to do this, but the idea with this release is to encourage people with local rigs and more powerful GPUs to tweak it with LoRAs themselves too.
- Then I've seen fascinating use-cases such as UI/UX designers who prompt the model to create icons, illustrations, and sometimes even whole layouts that they then use as a reference (like Pinterest) to refine their designs in Figma. This reminds me of people who have a raster image and then vectorize it manually using the pen tool in Adobe Illustrator.
We also have seen big companies using it for both internal presentations and external ads across marketing teams and big agencies like Publicis.
EDIT: Then there's a more speculative use-case that I have in mind: Generating realistic pictures of food.
While some restaurants have people who make illustrations of their menu items, and others have photographers, the long tail of restaurants does not have the means or expertise to do this. The idea we have from the company perspective is to make it as easy as snapping a few pictures of your dishes and turning your whole menu (in this case) into a set of professional-looking pictures that accurately represent it.
Helpful blog post for understanding what kind of data is needed for these models!
Does this have any application for generating realistic scenes for robotics training?
Thank you! Glad you find it helpful. The model is focused on photorealism, so it should be able to generate most realistic scenes. That said, I think using 3D engines would be more suitable for typical robotics-training cases, since they give you ground-truth data on objects, locations, etc.
One interesting use case would be if you are focusing on a robotics task that would require perception of realistic scenes.
Hey thanks! I’ll ask Sangwu to hop in here to answer this and give a more research-oriented answer.
yeah, data really is everything. that was the number one lesson from this whole project
How large was the dataset used for post-training?
We used two types of datasets for post-training: supervised finetuning data, and preference data used for the RLHF stage. You can actually use fewer than 1M samples to significantly boost the aesthetics. Quality matters A LOT. Quantity helps with generalisation and stability of the checkpoints, though.
How is the data collected?
The highest-quality finetuning data was hand-curated internally. I would say our post-training pipeline is quite similar to the Seedream 2.0 ~ 3.0 series from ByteDance. Similar to them, we use extensive quality filters and internal models to get the highest quality possible. Even from there, we still hand-curate a final subset.
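For readers wondering what the two dataset types look like in practice, here's a rough sketch of the record shapes (field names are illustrative, not our actual schema):

from dataclasses import dataclass

@dataclass
class SFTExample:
    prompt: str               # caption describing the image
    image_path: str           # hand-curated, high-quality photograph

@dataclass
class PreferenceExample:
    prompt: str
    chosen_image_path: str    # generation the rater preferred
    rejected_image_path: str  # generation the rater ranked lower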
Non-commercial license... What's the point even, I can't use it for anything.
The license, as I understand it, applies only to the model, not to the images produced? Otherwise they would have to respect the licenses of the images it was trained on.
for the love of the game, not for corporate greed
yoo i'm also a researcher on the krea 1 project and happy to answer any questions :)
hello Erwann, great work! I have a very technical question just for you: how are you today?
hahahah i'm doing well tianpei good to hear from you!
For the Krea team that might be reading: I was trying to evaluate Krea for my image gen use case, and couldn't find:
- cost per image
- latency per image
Hope you guys can add it somewhere!
Yup... We have the following: https://www.krea.ai/pricing
We wanted to keep this technical blog post free of marketing fluff, but maybe we overdid it.
However, sometimes it's hard to give an exact price per image, as it depends on resolution, number of steps, whether a LoRA is being used or not, etc.
I saw this comparison on reddit[1] between Wan 2.2 and FLUX.1 Krea, and it doesn't look like Krea is successful at avoiding the “AI look”, whereas Wan succeeds brilliantly.
[1]: https://www.reddit.com/r/StableDiffusion/comments/1mec2dw/te...
Does it work out of the box with other Flux-compatible tools such as sd-scripts?
How much data is the model trained on?
Copying and pasting Sangwu’s answer:
We used two types of datasets for post-training: supervised finetuning data, and preference data used for the RLHF stage. You can actually use fewer than 1M samples to significantly boost the aesthetics. Quality matters A LOT. Quantity helps with generalisation and stability of the checkpoints, though.
How is data acquired and curated?
Great, these people are crazy. CONGRATS!!!
Haha thanks! Do you happen to have a use-case for it?
Nitpick: this is not open weights, this is weights-available. The license restricts many things, like commercial use, NSFW, etc.
I mean, this started with Stable Diffusion 1.x through XL, which were only loosely open, and it has just gotten worse, with models progressively farther from open-licensed being described as “open weights”. But yes, Flux.1 Krea (like the weights-available versions of Flux.1 from BFL itself) is not open even to the degree of the older versions of Stable Diffusion; weights-available and “free-as-in-beer licensed for certain uses”, sure, but not open.
Alright we've made the title not say open, in the hope of routing around this objection.
For Dang or HN-mods:
I noticed that the URL for this submission is wrong: I tried to submit the correct URL (https://www.krea.ai/blog/flux-krea-open-source-release) but, for some reason, the submission gets flagged as a duplicate, and then I can only find this item, which points to our old blog post.
In the meantime, I'll set up a server-side redirect from the old blog post to our new one, but it would be nice to fix the link, and I don't think I can do it on my side.
It's because https://www.krea.ai/blog/flux-krea-open-source-release contains a canonical link element pointing to the old blog post.
Our software follows canonical links when it finds them. I've fixed the link above now (and rolled back the clock on the submission, to make up for lost time), but you might want to fix this for future pages.
OMG. Thank you! I had to set up a CDN-level redirect, and I was so confused as to why, when I asked others to help, their submissions were flagged as [dupe] or [dead].
Thank you so much! I knew that HN software was advanced, but I didn’t know you guys used Canonical URLs like Google does. Smart and thanks for helping us with this slip!!!
Oh you're welcome - it does lead to a lot of not-obvious problems like this but I think it's worth it overall. It helps with duplicate detection, merging threads, and so on.
Images still look AI generated
Yeah, there are still imperfections. But it's surprising to us how much the quality can be improved without the need for a whole pre-training (re-)run.
Is it possible (or do people already do this) to train a classifier to identify the AI look and use it as an adversary, to try to maximise both 'quality' and 'not that sort of quality'?
I actually tried a few experiments in the early exploration stages! I trained a small classifier to judge AI vs non-AI images and used it as a reward model for small RL / post-training experiments. Sadly, it was not too successful. We found that directly finetuning the model on high-quality photorealistic images was the most reliable approach.
Another note about preference optimisation and RL: it has a really high quality ceiling but needs to be very carefully tuned. It's easy to get perfect anatomy and structure if you decide to completely "collapse" the model. For instance, ChatGPT images are collapsed to have a slight yellow color palette. FLUX images always have this glossy, plastic texture with an overly blurry background. It's similar to the reward-hacking behavior you see in LLMs where they sound overly nice and chatty.
I had to make a few compromises to balance between a "stable, collapsed, boring" model and an "unstable, diverse, explorative" model.
I could see how you might need a multi-channel classifier: one output (A) on a range from -1 = "this looks like AI" to 1 = "this does not look like AI", and another (R) from 1 = "the above factor is relevant to this image" to 0 = "the AI-ness of this image is not a meaningful concept".
Then optimise for max(Quality + A*R), as in the sketch below.
Arguably the amplitude of A should do the job of R, but I think AI-ness and AI-ness-relevance are distinct concepts (it could be highly relevant, but the classifier can't tell what it should be).
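In code, the idea is just a relevance-gated term on top of a base quality score (purely illustrative):

def score(quality: float, ai_ness: float, relevance: float) -> float:
    # quality:   base aesthetic/quality score from an existing reward model
    # ai_ness:   A, -1.0 = "looks like AI" ... +1.0 = "does not look like AI"
    # relevance: R, 0.0 = AI-ness is meaningless here ... 1.0 = fully relevant
    return quality + ai_ness * relevance

# a photorealistic portrait where AI-ness matters a lot:
print(score(quality=0.8, ai_ness=-0.6, relevance=1.0))  # 0.2
# the same AI-ness score on an abstract illustration where it barely matters:
print(score(quality=0.8, ai_ness=-0.6, relevance=0.1))  # 0.74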
How did you train while ensuring only images consensually acquired were used?
Likely the same way visual artists ensure that they only learn from images with permissive licenses.
Human learning and computer processing millions of works are different things. I don't think any human artists have seen as many images as the developers used for training.
Only if you don't count artists with able eye(s) living their life?
Then let he who hath not sinned cast the first stone.
Yes scale is totally irrelevant, that's what all of FAANG tell us too :/
Guy who gets his morality from big company internal policies.