> Transform this image into a photographed claymation diorama of assorted artisan chocolates and candies […] viewed from a low-angle
Side note: whenever I read prompts for image generation, I notice very specific details which the model obviously ignored. Here the chocolates / candies in the last two images look anything but artisanal. They look very "sterile" and mass-produced. The viewing angle is also not accurate.
Why do we even bother writing such elaborate prompts, when the model ignores most of it anyway?
I have noticed the same thing. The few times I wanted to use image generation it always failed me in exactly these aspects. I always put it off as a lack of prompting skill on my end. Once you start to keep an eye out for these inconsistencies, they turn out to be very common.
I believe most detailed prompts are AI generated.
That's funny if it's true. I'd like to see the prompt which generates the prompt.
I wonder how long it took to come up with all this?
Because if I wanted a spiral of little "buttons" like the last one at the end (and they don't look very much like sweets) I'd be able to knock that out in Blender in an afternoon, and I'm not very good at Blender.
I think you're vastly overestimating the average person's ability to use Blender if you can do that in an afternoon; just figuring out how to place a colored cube and the camera probably takes an afternoon if you pick up Blender for the first time.
And knowing these little tricks to get what you want with image generation models also takes time. Not to mention you need some knowledge of other software just to make the underlying layout.
I guess I'm coming at it from having used Blender for an afternoon or so, and already knowing Python.
If you were good at GLSL you could do it in that maybe.
Someone somewhere is going to write something that directly draws it to a framebuffer in Brainfuck, you just know it, don't you?
I remember opening Blender for the first time years ago and thinking it had the steepest learning curve of any software I'd ever used.
I'm glad that we're making progress towards a deeper understanding of what LLMs are inherently good at and what they're inherently bad at (not to say incapable of doing, but stuff that is less likely to work due to fundamental limitations).
There's similarity here with, for example, defining the architecture of software, but letting an LLM write the functions. Or asking an LLM to write you the SQL query for your data analysis, rather than asking it to do your data analysis for you.
What I'd really like to see is a more well defined taxonomy of work and studies on which bits work well with LLMs and which don't. I understand some of this intuitively, but am still building my intuition, and I see people tripping up on this all the time.
> due to fundamental limitations
People keep throwing this phrase around in relation to LLMs, when not a single “fundamental limitation” has been rigorously demonstrated to exist, and many tasks that were claimed to be impossible for LLMs two years ago supposedly due to “fundamental limitations” (e.g. character counting or phonetics) are non-issues for them today even without tools.
Character counting remains a huge issue without tools.
Are you using only frontier models that are gated behind openai/anthropic/google APIs? Those use tools to help them out behind the scenes. It remains no less impressive, but I think we should be clear.
>People keep throwing this phrase around in relation to LLMs, when not a single “fundamental limitation” has been rigorously demonstrated to exist
Some limitations are not rigorously demonstrated to be fundamental, but continuously present from the first early LLMs yes. Shouldn't the burden of proof be on those who say it can be done?
And some limitations are fundamental, and have been rigorously demonstrated, e.g.:
https://arxiv.org/abs/2401.11817
That paper’s abstract doesn’t carry its title, to put it mildly.
What part of "Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all the computable functions and will therefore inevitably hallucinate if used as general problem solvers. " doesn't carry the title, to ask mildly?
I don’t agree with that definition of “hallucination”, for starters.
The literal best public models still fail to count characters consistently in practice so I’m not sure what you mean. It’s literally a problem we’re still trying to solve at work
What's amazing is that they even can fairly reliably appear to count characters. I mean we're talking about systems that infer sequences not character counters or calculators. They are amazing in unrelated ways and we need to accept this so we can use them effectively.
Of course, they’re shockingly powerful, just in an incredibly “spiky” way
Is character counting actually not an issue anymore? Do you know somewhere where I can read more about this?
Character counting errors are a side effect of tokenization, which is a performance optimization. If we scaled the hardware big enough we could train on raw bytes and avoid it.
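As a concrete illustration of that (using the tiktoken library; the encoding name and the exact token split shown are just examples), the model works on token IDs rather than letters:

```python
# Illustration: the model never "sees" individual letters, only token IDs.
# Uses the tiktoken library; cl100k_base is one common encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)                             # a short list of integer token IDs
print([enc.decode([t]) for t in tokens])  # e.g. ['str', 'aw', 'berry'] -- chunks, not letters

# Counting the letter 'r' means reasoning across token boundaries,
# something the model has to learn rather than read off directly.
print("strawberry".count("r"))            # 3, trivially, at the character level
```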
No, tokenization is not the only reason. A next-word predictor fundamentally has a hard time executing algorithms, even ones as simple as counting.
Your comment, after removing the particulars, has a shape of:
People have an <opinion> which hasn't been rigorously proven, while <not rigorously proven counteropinion>.
As such, I am not sure what you're trying to achieve here.
Drawing five-fingered humans was a fundamental limitation... until it wasn't.
This is kind of my point, we need to get better at describing the limitations and study them. It seems extremely clear that there are limitations, and not just temporary ones, but structural limitations that existed at the beginning and continue to persist.
Yeah I think it was the word "fundamental" he took issue with.
If you remove the auxiliary tools and just leave the core LLM then strawberry still has an undefined number of `r`s in it.
That’s false. Larger LLMs learn token decompositions through their training, and in fact modern training pipelines are designed to occasionally produce uncommon tokenizations (including splitting words into individual characters) for this reason. Frontier models have no trouble spelling words even without tools. Even many mid-sized models can do that.
Wait, where can I learn more about this? I don't doubt that varying the tokenization during training improves results, but how does/would that enable token introspection?
Because LLMs can learn that different token sequences represent the same character sequence from training context. Just like they learn much more complex patterns from context.
You can try this out locally with any mid-sized current-gen LLM. You’ll find that it can spell out most atomic tokens from its input just fine. It simply learned to do so.
Of course, if you choose to ignore all the limitations, they indeed have no limitations.
Nobody says they have no limitations. The question is whether those limitations are fundamental, i.e. can we expect improvement, say, within a year.
When I talk about fundamental limitations, I mean limitations that can't be solved, even if they could be improved.
We have improved hallucinations significantly, and yet it seems clear that they are inherent to the technology and so will always exist to some extent.
“Seems clear” based on what?
For one, based on continuously frustrated hopes (and promises!) that hallucinations will go away.
As a general architecture, an LLM also has limitations that can't be improved unless we switch to another, fundamentally different AI design that's non LLM based.
There are also limitations due to maths and/or physics that aren't fixable under any design. Outside science fiction, there is no technology whose limitations are all fixable.
Here's one: https://arxiv.org/abs/2401.11817
> There's similarity here with, for example, defining the architecture of software, but letting an LLM write the functions.
Not so long ago, this was what early adopters of LLM coding assistants claimed was the right way to use them in coding tasks: prompt to draft the outline, and then prompt to implement each function. There were even a few blog posts on HN showing off this approach, with terms inspired by animation work.
In short, LLMs are pretty great at working at a single level of abstraction at a time.
You can go from the highest level and all the way down to the lowest level with LLMs, you just have to work at it iteratively one level at a time.
I'm not necessarily suggesting always getting down to literally the function level, although I think that gives you excellent quality control, but having a code-level understanding is clearly an important factor.
I found a simple technique to get reliable text and numbers in AI-generated images.
I'm surprised the image models aren't already doing this, so I wanted to share since I'm finding it so useful.
Isn’t this sort of just “chain of thought” (i.e. the seminal https://arxiv.org/abs/2201.11903 ) where the user is helping the model 1-shot or k-shot the solution instead of 0-shot? I’ve used a similar technique to great effect. I feel things are so new / moving so fast that it’s hard to have common lingo. So very helpful to have a blog / example! But I wonder if the phenomena has been seen / understood before and just in smaller circles / different name.
TLDR: use SVG to outline image correctly first, then send that image with your text prompt to get Gemini 3.0 Pro to render with correct numbers and text
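For a rough idea of what that outline step can look like, here is one illustrative way to lay numbers along a spiral as an SVG using only the standard library (the rasterization via cairosvg in the trailing comment is just one option; any SVG-to-PNG converter works):

```python
# Sketch: build the "underdrawing" as an SVG with numbers placed along a spiral,
# rasterize it, and send the raster plus the styling prompt to the image model.
import math

def spiral_numbers_svg(n=24, size=1024, turns=2.5):
    cx = cy = size / 2
    parts = [
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">',
        f'<rect width="{size}" height="{size}" fill="white"/>',
    ]
    for i in range(n):
        t = i / (n - 1)
        angle = 2 * math.pi * turns * t
        radius = 40 + t * (size / 2 - 90)   # walk outward from the centre
        x = cx + radius * math.cos(angle)
        y = cy + radius * math.sin(angle)
        parts.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="28" fill="#ddd" stroke="#333"/>')
        parts.append(
            f'<text x="{x:.1f}" y="{y + 9:.1f}" font-size="26" '
            f'text-anchor="middle">{i + 1}</text>'
        )
    parts.append("</svg>")
    return "\n".join(parts)

with open("layout.svg", "w") as f:
    f.write(spiral_numbers_svg())

# Rasterize with whatever you have handy, e.g. cairosvg (pip install cairosvg):
#   import cairosvg; cairosvg.svg2png(url="layout.svg", write_to="layout.png")
# Then send layout.png together with the "transform this image into ..." prompt.
```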
This is just img2img where the first image with the correct structure was generated by code.
Yup, that’s exactly what this is. If you’ve been using generative models since the early Stable Diffusion days, it’s a pretty common (and useful!) technique: using a sketch (SVG, drawn, etc) as an ad-hoc "controlnet" to guide the generative model’s output.
Example: In the past I'd use a similar approach to lay out architectural visualizations. If you wanted a couch, chair, or other furniture in a very specific location, you could use a tool like Poser to build a simple scene as an approximation of where you wanted the major "set pieces". From there, you could generate a depth map and feed that into the generative model, at the time SDXL, to guide where objects should be placed.
Pretty much what the author said; just gave some context for the uninitiated.
Right, but you can use a different (codegen) model to make that code.
Like many AI things, it would have been considerably easier just to learn to edit images in GIMP or something. Instead of learning a valuable skill, you spent time working with a model that will be obsolete in a few months. Sunk cost fallacy, I guess.
This hack definitely falls in the “duh, why didn’t I think of that” category of tricks, but glad to now have it next time imagegen comes up short
Even the original Stable Diffusion app had image-to-image. It just didn't work as well. I'm not sure why this is supposed to be novel.
It’s obviously not a new model capability. But using this well-known, existing capability to solve this particular issue is only obvious after the fact.
It’s a useful trick to have in one’s toolbox, and I’m grateful to the author for sharing it.
It's not novel in the sense that nobody knew about img2img. It's novel in the sense that nobody thought of using img2img to solve this problem in this way.
OK, it might just be me then. I view Nvidia's DLSS as a similar thing. There was even this meme that video games will in the future only output basic geometry and the AI layer transforms it into stunning graphics.
It's novel if you never played with img2img, including especially several forms of (text+img)2img. Or, if you never tried editing images by text prompt in recent multimodal LLMs.
That said, I spent plenty of time doing both, and yet it would probably take me a while to arrive at this approach. For some reason, the "draw a sketch, have a model flesh it out" approach got bucketed with Stable Diffusion in my mind, and multimodal LLMs with "take detailed content, make targeted edits to it". So I'm glad the OP posted it.
I was thinking about doing the opposite for the common task of "SVG of a pelican riding a bike". Obviously, directly spitting out the SVG is gonna be bad. But image gen can produce a really stunning photorealistic image easily. Probably a good way to get an LLM to produce a decent bike-pelican SVG is to generate an image first and then get the model to trace it into an SVG. After all, few human beings can generate SVG works of art by just typing out numbers into Notepad. At the core of it, we still rely on looking at it and thinking about it as an image.
This seems analogous to how a human would do it accurately. If you asked an artist to paint stones in a large circular arrangement with the numbers in order in one shot, with no fixes or sketching allowed, it wouldn't be surprising to end up with problems in the arrangement.
We've been doing this for a long time now, it's similar to using a depth map or a line drawing to control the silhouette.
The standard objection: if the LLM is supposedly intelligent, why can’t it figure out on its own that this two-step process would achieve a better result?
Because image models at the basic level are just text tokens in, image tokens out. You'd need an agentic process on top to come up with a strategy, review output, try again, and so on.
I believe Nano Banana and gpt-image-2 have a little of this going on, but it's like asking a model to one-shot some code vs having an agentic harness with tools do it. Even the most basic agent can produce better code than ChatGPT can.
Because the LLM is more or less hardcoded to just pass "create image" style prompts to a separate model, possibly with some embellishment.
Nobody asked it to!
If it’s asked to generate an image, it should do everything in its powers to make the image good.
> it should do everything in its powers
That's a scary thought.
Hey Claude, why haven't you finished yet? ... Because the human I'm holding hostage hasn't finished the drawing yet.
LLMs have no concept of what makes the output "good". Or to put it another way, if the LLM generates an image with jumbled numbers it's because that was the most likely output, hence it was a "good" image according to its weights.
You don’t know what you don’t know
Part of the problem is that it isn't the LLM making the image directly itself, it's the LLM repeatedly prompting edits for a separate edit diffusion model. The Gemini reasoning summary shows part of this. The style of some of the images makes it also clear that it uses an Imagen 4 derived diffusion model underneath.
I wonder whether this could be used to fine-tune image models to provide better outputs. Something like this (steps 1 and 2 are sketched in code after the list):
1. Algorithmically generate an underdrawing (e.g. place numbers and shapes randomly in the underdrawing)
2. Algorithmically generate a description of the underdrawing (e.g. for each shape, output text like "there is a square with the number three in the top left corner"). You might fuzz this by having an LLM rewrite the descriptions in a variety of ways.
3. Generate a "ground truth" image using the underdrawing and an image+text-to-image model.
4. Use the generated description and the generated "ground truth" image as training data for a text-to-image model.
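A minimal sketch of how steps 1 and 2 might look; the shape vocabulary, sizes, and caption wording here are purely illustrative:

```python
# Sketch of steps 1 and 2: a random "underdrawing" as SVG plus a mechanical
# description of it. Shape set and caption format are illustrative choices.
import random

SHAPES = ["square", "circle", "triangle"]

def make_example(size=512, n_shapes=4, seed=None):
    rng = random.Random(seed)
    svg = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    caption = []
    for _ in range(n_shapes):
        shape = rng.choice(SHAPES)
        number = rng.randint(0, 9)
        x, y = rng.randint(60, size - 60), rng.randint(60, size - 60)
        if shape == "square":
            svg.append(f'<rect x="{x - 30}" y="{y - 30}" width="60" height="60" fill="none" stroke="black"/>')
        elif shape == "circle":
            svg.append(f'<circle cx="{x}" cy="{y}" r="30" fill="none" stroke="black"/>')
        else:
            svg.append(f'<polygon points="{x},{y - 30} {x - 30},{y + 30} {x + 30},{y + 30}" fill="none" stroke="black"/>')
        svg.append(f'<text x="{x}" y="{y + 6}" text-anchor="middle">{number}</text>')
        caption.append(f"there is a {shape} with the number {number} near ({x}, {y})")
    svg.append("</svg>")
    return "\n".join(svg), "; ".join(caption)

underdrawing, description = make_example(seed=0)
# Step 3 would render `underdrawing` through an image+text-to-image model to get
# the "ground truth" image; step 4 pairs that image with `description` (optionally
# rephrased by an LLM) as training data for a plain text-to-image model.
```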
That would add complexity to the architecture of a model to solve a finite set of cases. That's an argument for specialised/fine-tuned models, though.
LLMs are like a box of chocolates...
I hope this kind of stuff puts the idea to rest that we're close to actual AGI. Outsourcing this kind of basic stuff which a real intelligence would be able to do "internally" is a hack which works for this specific case but would prevent further generalizations of the task at hand.
But I'm foreseeing the opposite. This kind of tool use will soon be integrated and hidden, such that people will eventually say "see, we solved the problem that AI can't do 123+456, now we are really, really close to AGI." Yeah, no: with an AGI, it would have been the AGI itself that would have come up with needing a tool, building the tool and then using the tool. But that's not what LLMs are. They are statistical machines that predict tokens. They are very good at it, but that's not an AGI.
How is it that LLMs aren’t good at rendering the sequence of numbers but can reliably put the supplied pieces all in the right order?
Because the image generation is powered by a diffusion model that is only guided by the transformer model and still has a somewhat vague spatial representation, especially when it comes to coupling things like counting and complex positioning.
But by using the LLM to generate code (the kind an SVG graphic is made up of), and then using a rasterized image of that SVG as an input to the diffusion model, the rasterized image takes the place of the raw noise input and guides the denoising process of the diffusion model to put the numerical parts in the right spots.
The LLM is putting the SVG in the right order because the code that drives the SVG is just that - code - and the numerical order is easily defined there, even if it has to follow something like a spiral.
Edit: although LLMs now also may be using thinking modes, with their feedback during generation, to help with complex positioning when drawing something like an SVG. I just asked Claude to generate one such spiral-number SVG and it did so interactively via thinking, and the code generated is incredibly explicit with positions, so that must help. But the underlying idea of the two-step SVG-to-diffusion-model approach is the real key here.
It's normal to first create a plan, then allow agents to write code. But it seems to be surprising for many to first create a draft / outline of a picture, then go for a final render.
Has anyone built a platform which has image to image pipelines and lets you use prompt to SVG generation from SOTA LLMs?
ComfyUI?
Has anyone had good luck with making consistent game art and assets?
And what happens if the model can't come up with a good enough SVG to begin with?
Transformers are great translators. So, yeah, starting with structured output like SVG is probably the best approach.
It should be fairly trivial to fix any logic errors in the structured output, too.
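For example, the numeric ordering in the generated SVG can be checked deterministically before it ever reaches the diffusion model (standard library only; this assumes the numbers sit in <text> elements, which is just one possible convention):

```python
# Sketch: check that the <text> labels in the generated SVG read 1..N in order,
# so ordering mistakes can be caught (and regenerated) before the diffusion step.
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def numbers_in_order(svg_path):
    root = ET.parse(svg_path).getroot()
    labels = [el.text.strip() for el in root.iter(f"{SVG_NS}text") if el.text]
    numbers = [int(s) for s in labels if s.isdigit()]
    return numbers == list(range(1, len(numbers) + 1))

print(numbers_in_order("underdrawing.svg"))  # True only if the labels count up cleanly
```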
Love the concluding note : it works, but not really.
Such is the LLM/GenAI craze: an entire article to show that it's nearly there, yet it's not, despite convoluted effort to make it just so on a very, very niche example.
But if it works part of the time, it's useful. It's easy for a human to check that the numbers are correct, and if they aren't, just regenerate the image. Orders of magnitude easier than creating the image from scratch without the model.
I've been doing charts for slides like this for a while. Noticed HTML viz was super reliable, but I could style it with a diffusion model. It's very useful for data viz.
inb4 this technique is subsumed into the next MoE model release
LLMs are evolving so fast I wouldn’t be surprised if this technique was not needed in <6 months
I don't think the MoE part has anything to do with it, but the current gen of multimodal models can do thinking interleaved with autoregressive(?) image gen, so it's probably not long before they bake this into the RL process, the same way native thought obviated the need for "think carefully step by step" prompts.
LLMs are rather devolving at this point.
I wondered why I was losing all passion for creating. These tips and tricks are part of the answer.
Wait, where did it get the "Sweet Path//Trail of treats" thing from in the SVG? It wasn't about sweets at that point. Something missing here, I think.
I wish the opposite were true: that when I tell Gemini I want "a diagram of X", it immediately breaks out Python and matplotlib, instead of wasting my time with Nano Banana.
Inpainting/guiding from a sketch is how I've always used diffusion models. I thought everyone did that, or at least everyone who wasn't just trying to get some arbitrary filler material without much care of what the output looked like.
I feel sorry for the recipient.
A few months ago I tried to make Mistral's Le Chat output French poetry in alexandrines (12 vowels). Disastrous at first. Then, adding the specification that each line also had to be transposed into IPA and each syllable counted, it went better.
Still emotionally unrelatable, but it definitely was providing something that matched the specifications where they are explicit and systematically enforced through deterministic means. For now I maintain that LLM limitations are such that they can't seize the ineffable, and they are so untrustworthy that they can only be employed under very clear and inescapable constraints, or they will go awry just as surely as water is wet.
tldr: do a standard img2img workflow where you lay out a skeleton or low-res version, and then turn it into the final high-quality photorealistic version, instead of trying to zero-shot it purely from a text prompt.