Looks like a cool search engine! Hadn't heard about it before.
But the search page says "Simple technology, no AI". With this change, that is no longer true though, is it? Of course the definition of "AI" is extremely vague. Once upon a time, A-star search was considered AI after all.
This was a very meandering project, and trying to corral it into some sort of coherent narrative was a bit of an undertaking on its own. Hopefully it makes some sense.
Hi Viktor! Really cool write-up, thanks! Uruky is already using the `nsfw` param, but set to `0` or `1`, and I see in your example this looks like a new value option (`2`) that's "better" than `1`? How "safe" is it to implement it as the value to send when someone wants SFW results?
`0` disables all filtering
`1` filters 'harmful' sites per the UT1 blacklists
`2` is `1` + the new NSFW filter.
The new filter works pretty well in my assessment. It's not infallible, but it gives significantly cleaner results.
And if you do find queries it fails to sanitize, I'd love to hear about them.
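For anyone wiring this up client-side, here's a minimal sketch of sending the parameter. The host and the `query` parameter name are placeholders for illustration, not Marginalia's actual API:

```python
from urllib.parse import urlencode

def build_search_url(query: str, nsfw: int = 2) -> str:
    """Build a search URL with the nsfw filter level.

    nsfw=0: no filtering; 1: UT1 blacklists; 2: blacklists + new NSFW filter.
    The host below is a placeholder, not the real endpoint.
    """
    params = urlencode({"query": query, "nsfw": nsfw})
    return f"https://search.example.invalid/search?{params}"

print(build_search_url("knitting patterns"))
# -> https://search.example.invalid/search?query=knitting+patterns&nsfw=2
```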
Thanks, already implemented and tested a couple of queries and it does look good!
Can you add 3, which only returns content flagged as NSFW?
So I can make sure I know what sites to stay away from, of course
Wouldn't work very well, in that you'd get awful recall.
The way the filter is implemented, it runs after the query has been executed. I'd have to run it at document processing time, code in a pseudo-keyword for the label, and then add that to the query.
It's doable, but I question whether the juice is worth the squeeze.
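A toy sketch of the two designs, with a stand-in index and classifier (all names here are illustrative, not the actual codebase):

```python
def classify_nsfw(doc: str) -> bool:
    # Stand-in classifier for illustration only.
    return "nsfw" in doc.lower()

def extract_keywords(doc: str) -> list[str]:
    # Stand-in tokenizer for illustration only.
    return doc.lower().split()

class Index:
    def __init__(self, docs):
        self.docs = docs
    def search(self, query: str) -> list[str]:
        return [d for d in self.docs if query.lower() in d.lower()]

def search_then_filter(query: str, index: Index) -> list[str]:
    # Current design: execute the query first, then drop NSFW results.
    # Inverting this ("only NSFW") gives poor recall, since the filter
    # only ever sees documents the query already returned.
    return [d for d in index.search(query) if not classify_nsfw(d)]

def label_at_processing_time(doc: str) -> list[str]:
    # Alternative: label during document processing and emit a
    # pseudo-keyword, so NSFW-only could be expressed in the query itself.
    kws = extract_keywords(doc)
    if classify_nsfw(doc):
        kws.append("special:nsfw")  # hypothetical pseudo-keyword
    return kws
```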
Or perhaps -2
Does marginalia_nu not use embedding models as part of search? I guess I assumed it would. If you have embeddings anyway, decision trees on the embedding vector (e.g. catboost) tend to work pretty well. Fine-tuning ModernBERT works even better, but probably won't meet the criteria of "really fast and run well on CPUs". That said, the approach described in the article seems to work well enough, and obviously provides extremely cheap inference.
It does not use any transformer models right now. I've experimented with BERT-adjacent methods, but not found them fast enough to be useful. Basically, whatever approach is used, it needs to do inference at ~10us latencies to make real-time result filtering viable, or at <1ms to avoid adding unreasonable overhead to processing-time result labeling.
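To put those budgets in perspective, a back-of-envelope calculation (the per-query result count is an illustrative guess, not a measured figure):

```python
def realtime_filter_overhead_ms(n_results: int, per_doc_us: float = 10.0) -> float:
    """Total per-query overhead if every candidate result is classified
    in real time at per_doc_us microseconds each."""
    return n_results * per_doc_us / 1000.0

# At ~10us per document, filtering 100 candidates adds ~1ms per query;
# at a BERT-like ~1ms per document, the same set would add ~100ms.
print(realtime_filter_overhead_ms(100, per_doc_us=10.0))    # 1.0
print(realtime_filter_overhead_ms(100, per_doc_us=1000.0))  # 100.0
```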
Have you seen many examples of websites labeling themselves, perhaps using rating meta tags (<meta name="rating" ...>)? Self-labeling seems valuable in some ways, but I don't think I've seen it catch on.
Meta tags are almost universally garbage, but the presence of '18 USC 2257' (or U.S.C.) is a very strong NSFW signal.
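That signal is cheap to check for; a quick sketch (the variants matched are my guess at common phrasings of the notice):

```python
import re

# Matches "18 USC 2257", "18 U.S.C. 2257", "18 U.S.C. § 2257", etc.
USC_2257 = re.compile(r"18\s+U\.?\s?S\.?\s?C\.?\s*(?:§\s*)?2257", re.IGNORECASE)

def has_2257_notice(text: str) -> bool:
    """True if the page text contains an 18 USC 2257 compliance notice."""
    return USC_2257.search(text) is not None
```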
Does this comment make this page NSFW on Marginalia?