Would this work for my use case?
I need to extract article content, determine its sentiment towards a keyword, and output a simple JSON with article name, URL, sentiment, and some text around the found keyword.
Currently I'm having problems with the JSON output: it's not reliable enough and produces a lot of invalid JSON.
> LLMs return malformed JSON more often than you'd expect, especially with nested arrays and complex schemas. One bad bracket and your pipeline crashes.
This might be one reason why Claude Code uses XML for tool calling: repeating the tag name in the closing bracket helps it keep track of where it is during inference, so it is less error prone.
Yeah that's a good observation. XML's closing tags give the model structural anchors during generation — it knows where it is in the nesting. JSON doesn't have that, so the deeper the nesting the more likely the model loses track of brackets.
We see this especially with arrays of objects where each object has optional nested fields. With complex nested objects, the model can get every item well formatted except one with an invalid field of the wrong type. That's why we put effort into the repair/recovery/sanitization layer — validate field-by-field and keep what's valid rather than throwing everything out.
Hardly matters; this isn't a problem you'd have these days with modern LLMs.
Also, a model can always use a proxy to turn your tool calls into XML and feed you back JSON right away, and you wouldn't even know any transformation took place.
We do see fewer invalid JSONs on the latest, bigger LLMs, but it can still happen on smaller and cheaper models. There are also cases where the input is truncated or a required field isn't found, which are inherently difficult.
On XML vs JSON, I think the goal here is to generate typed output, which is where JSON with zod shines - for example, the result can be type-checked and inserted into typed database columns later.
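To make that concrete, here is a sketch of what such a schema could look like for the article-sentiment use case asked about at the top of the thread (field names are my own, not the library's):

```ts
import { z } from "zod";

// Hypothetical output schema for the article-sentiment question above.
const ArticleSentiment = z.object({
  articleName: z.string(),
  url: z.string().url(),
  sentiment: z.enum(["positive", "neutral", "negative"]),
  keywordContext: z.string(), // text surrounding the matched keyword
});

// The parsed result is fully typed, so it can be type-checked and written
// straight into typed database columns.
type ArticleSentiment = z.infer<typeof ArticleSentiment>;
```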
Thing is, even with XML the LLM will fail every now and then.
I've built agents both with tool calling and by parsing XML.
You always need a self-correcting loop built in: if you are editing a file with an LLM, you need to provide hints so the LLM gets it right the second, third, or nth time.
Just by switching to XML you won't get that.
I used to use XML; now I only use it for examples in the system prompt for the model to learn from. That's all.
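For illustration, a minimal sketch of such a self-correcting loop; `callLLM` is a hypothetical helper, and the parse step could be zod, an XML parser, or a file-diff check:

```ts
declare function callLLM(prompt: string): Promise<string>; // hypothetical LLM client

// Retry until the output parses, feeding the failure reason back as a hint.
async function generateWithRetries<T>(
  prompt: string,
  parse: (raw: string) => T, // throws with a descriptive message on failure
  maxAttempts = 3
): Promise<T> {
  let lastError = "";
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const hint = lastError
      ? `\nYour previous output was invalid: ${lastError}. Please fix it.`
      : "";
    const raw = await callLLM(prompt + hint);
    try {
      return parse(raw);
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err);
    }
  }
  throw new Error(`Still invalid after ${maxAttempts} attempts: ${lastError}`);
}
```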
Agreed - in this project I did a one-pass sanitization to recover invalid optional/nullable fields or discard invalid objects in nested arrays.
I know multi-pass LLM approaches exist, e.g. generating JSON patches:
https://github.com/hinthornw/trustcall
This looks pretty interesting! I haven't used it yet, but I looked through the code a bit. It looks like it uses turndown to convert the HTML to markdown first, then passes that to the LLM, so I'm assuming that's a huge reduction in tokens from preprocessing. Do you have any data on how often this can cause issues, i.e. tables or other information being lost?
Then langchain and structured schemas for the output, along w/ a specific system prompt for the LLM. Do you know which open source models work best, or do you just use Gemini in production?
Also, looking at the docs, Gemini 2.5 flash is getting deprecated by June 17th https://ai.google.dev/gemini-api/docs/deprecations#gemini-2.... (I keep getting emails from Google about it), so might want to update that to Gemini 3 Flash in the examples.
HTML -> markdown -> LLM is standard practice. We strip elements like aside, embed, head, iframe, etc. The criteria are conservatively set to avoid removing too many elements (especially in extractMain mode):
https://github.com/lightfeed/extractor/blob/main/src/convert...
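As a rough sketch of that preprocessing step, assuming turndown's standard API; the element list here is illustrative, the real one is in the link above:

```ts
import TurndownService from "turndown";

// Convert raw page HTML to markdown, dropping elements that rarely carry
// article content. The markdown is what gets sent to the LLM - usually a
// small fraction of the raw HTML's token count.
function htmlToMarkdown(html: string): string {
  const turndown = new TurndownService();
  turndown.remove(["aside", "head", "iframe", "script", "style"]);
  return turndown.turndown(html);
}
```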
I have used Gemma 3 and had good results.
Once Gemini 3 Flash drops the preview suffix, I will update the examples. Thank you for the pointer.
The extraction prompt would need some hardening against prompt injection, as far as I can tell.
> Avoid detection with built-in anti-bot patches and proxy configuration for reliable web scraping.
And it doesn't care about robots.txt.
Good point. The anti-bot patches here (via Patchright) are about preventing the browser from being detected as automated — things like CDP leak fixes so Cloudflare doesn't block you mid-session. It's not about bypassing access restrictions.
Our main use case is retail price monitoring — comparing publicly listed product prices across e-commerce sites, which is pretty standard in the industry. But fair point, we should make that clearer in the README.
robots.txt is the most basic access restriction, and this doesn't even read it, while faking itself as human [0]. It is about bypassing access restrictions.
[0]: https://github.com/lightfeed/extractor/blob/d11060269e65459e...
Regardless. You should still respect robots.txt.
We do respect robots.txt in production - scraping browser providers like BrightData also enforce that.
I will add a PR to enforce robots.txt before the actual scraping.
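A hedged sketch of what such a pre-scrape gate could look like; robots-parser is a real npm package, but its use here is my assumption, not necessarily how the PR will implement it:

```ts
import robotsParser from "robots-parser";

// Check robots.txt before fetching a page; the user agent string is a placeholder.
async function isAllowedByRobots(pageUrl: string, userAgent = "lightfeed"): Promise<boolean> {
  const robotsUrl = new URL("/robots.txt", pageUrl).href;
  const res = await fetch(robotsUrl);
  if (!res.ok) return true; // common convention: no robots.txt means allowed
  const robots = robotsParser(robotsUrl, await res.text());
  return robots.isAllowed(pageUrl, userAgent) ?? true;
}
```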
How can people believe that you are respecting bot detection in production when your software's README says it can "Avoid detection with built-in anti-bot patches"?
> It's not about bypassing access restrictions.
Yes. It is. You've just made an arbitrary choice not to define it as such.
I will add a PR to enforce robots.txt before the actual scraping.
My instinct was also to use LLMs for this, but it was way too slow and still expensive if you want to scrape millions of pages.
To put things in perspective - Gemini 2.5 Flash is $0.30/1M input tokens - assuming each page is 700 tokens and the output is not much, you are looking at $210 for 1M pages.
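Spelling out that back-of-envelope math, with the prices as quoted above:

```ts
const pricePerMTokens = 0.3; // USD per 1M input tokens, as quoted above
const tokensPerPage = 700;
const pages = 1_000_000;

// (1M pages * 700 tokens) / 1M tokens * $0.30 = $210, input side only
const costUsd = (pages * tokensPerPage / 1_000_000) * pricePerMTokens;
console.log(costUsd); // ~210
```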
You will absolutely struggle to get all the info you need into 700 tokens per page.
Edit: There's also the added complexity of running a browser against 1M pages, or more.
My platform has 24M pages on 8 domains and these NASTY crawlers insist on visiting every single one of them. For every 1 real visitor there are at least 300 requests from residential proxies. And that's after I blocked entire countries like Russia, China, Taiwan and Singapore.
Even Cloudflare's bot filter only blocks some of them.
I'm using honeypot URLs right now to block all crawlers that ignore rel="nofollow", but they appear to have many millions of devices. I wouldn't be surprised if there are a gazillion residential routers, webcams and phones that are hacked to function as simple doorways.
Things are really getting out of hand.
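For what it's worth, the honeypot idea described above can be as simple as this hedged Express sketch; the trap path and ban handling are illustrative, not the commenter's actual setup:

```ts
import express from "express";

const app = express();
const banned = new Set<string>();

// Linked from pages with rel="nofollow"; no legitimate visitor or polite
// crawler should ever request it, so anyone who does gets banned.
app.get("/trap/do-not-follow", (req, res) => {
  banned.add(req.ip ?? "");
  res.status(403).end();
});

// Block all further requests from banned IPs.
app.use((req, res, next) => {
  if (banned.has(req.ip ?? "")) return res.status(403).end();
  next();
});
```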
What crawlers are using residential proxies?
What's your experience with not getting blocked by anti-bot systems? I see you've got custom patches for that.
The anti-bot patches here (via Patchright) are about preventing the browser from being detected as automated — fixing CDP leaks, removing automation flags, etc. For sites behind Cloudflare or Datadome, that alone usually isn't enough — you'll need residential proxies and proper browser fingerprints on top. The library supports connecting to remote scraping browsers via WebSocket and proxy configuration for those cases.
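A sketch of what that looks like in practice, assuming Patchright keeps Playwright's connect/launch API (it is billed as a drop-in replacement); the WebSocket endpoint and proxy values are placeholders:

```ts
import { chromium } from "patchright"; // drop-in replacement for playwright

// Either attach to a remote scraping browser over WebSocket, or launch
// locally with a residential proxy configured.
const browser = process.env.SCRAPING_BROWSER_WS
  ? await chromium.connect(process.env.SCRAPING_BROWSER_WS)
  : await chromium.launch({
      proxy: {
        server: "http://proxy.example.com:8000", // placeholder proxy endpoint
        username: "user",
        password: "pass",
      },
    });

const page = await browser.newPage();
```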
As someone who is getting HAMMERED BEYOND BELIEF by residential proxies, I just want to express my hatred to all of you.
Curious. Care to share more? What approaches have you tried?
https://news.ycombinator.com/edit?id=47528370
This feels like slop to me.
It may or may not be, but if you want people to actually use this product I’d suggest improving your documentation and replies here to not look like raw Claude output.
I also doubt the premise about malformed JSON. I have never encountered anything like what you are describing with structured outputs.
In the context of e-commerce web extraction, invalid JSON can occur, especially in edge cases, for example:
price: z.number().optional() -> price: "n/a"
url: z.string().url().nullable() -> url: "not found"
A single invalid object in an array (e.g. a missing required field, or truncated input) can also cause the entire output to fail.
The unique contribution here is that we can recover invalid nullable or optional fields, and also remove invalid nested objects in an array.
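To illustrate the recovery idea with the exact edge cases above; this is a hedged sketch, not the library's actual code, and the schema and function names are made up:

```ts
import { z } from "zod";

const Product = z.object({
  name: z.string(),
  price: z.number().optional(),
  url: z.string().url().nullable(),
});

// Recover what the schema tolerates losing; discard the object otherwise.
function recover(raw: unknown): z.infer<typeof Product> | null {
  const direct = Product.safeParse(raw);
  if (direct.success) return direct.data;
  if (typeof raw !== "object" || raw === null) return null;

  const candidate: Record<string, unknown> = { ...(raw as Record<string, unknown>) };
  for (const issue of direct.error.issues) {
    const key = String(issue.path[0]);
    const field = Product.shape[key as keyof typeof Product.shape];
    if (field instanceof z.ZodOptional) delete candidate[key];      // price: "n/a" -> dropped
    else if (field instanceof z.ZodNullable) candidate[key] = null; // url: "not found" -> null
    else return null; // invalid required field -> discard the whole object
  }
  const retry = Product.safeParse(candidate);
  return retry.success ? retry.data : null;
}
```

Applied element-wise over a nested array, this keeps the valid items instead of failing the whole batch.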
Robots.txt anyone?
Good point. The anti-bot patches here (via Patchright) are about preventing the browser from being detected as automated — things like CDP leak fixes so Cloudflare doesn't block you mid-session. It's not about bypassing access restrictions.
Our main use case is retail price monitoring — comparing publicly listed product prices across e-commerce sites, which is pretty standard in the industry. But fair point, we should make that clearer in the README.
https://news.ycombinator.com/item?id=47340079
Regardless. You should still respect robots.txt.
> comparing publicly listed product prices across e-commerce sites
Those prices and that information are there for public viewers. One reason some people have a robots.txt in the first place is to reduce the traffic load that slop crawlers generate. The bandwidth is not free, so why would you presume to ignore their robots.txt when you're not footing the bill?