You can try Lightning AI's model APIs https://lightning.ai/lightning-ai/models?section=allmodels&v...
What would be the cost for you to self host and serve locally on a network? It's potentially more cost effective to aggregate resources to host "on prem" vs the ongoing SaaS costs.
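A back-of-envelope way to answer that cost question. Every number here is a hypothetical placeholder, not a real quote: an assumed one-time hardware cost amortized over three years plus power, compared against an assumed blended per-token API price.

```python
# All figures below are illustrative assumptions, not real prices.
HARDWARE_COST = 2000.0        # assumed one-time cost of a self-hosted GPU box (USD)
POWER_COST_PER_MONTH = 30.0   # assumed electricity cost at a few hundred watts average
MONTHS = 36                   # amortization window

API_PRICE_PER_MTOK = 0.50     # assumed blended $/1M tokens on a hosted API
TOKENS_PER_MONTH = 200e6      # assumed aggregate usage across the whole local network

self_host_monthly = HARDWARE_COST / MONTHS + POWER_COST_PER_MONTH
api_monthly = API_PRICE_PER_MTOK * TOKENS_PER_MONTH / 1e6

print(f"self-host: ${self_host_monthly:.2f}/mo, API: ${api_monthly:.2f}/mo")

# Break-even: how many tokens/month make the on-prem box cheaper than the API.
break_even_tokens = self_host_monthly / API_PRICE_PER_MTOK * 1e6
print(f"break-even: {break_even_tokens / 1e6:.0f}M tokens/month")
```

The point is the shape of the comparison, not the numbers: self-hosting is a fixed monthly cost, the SaaS bill scales with usage, so aggregating enough users on one box is what tips the math.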
So you mean firing up Ollama first, and then going for an LLM service later if it scales?
Yes. I have seen examples of servers for developing countries that host local copies of Wikipedia and other open data sets, as well as an aggressive caching proxy for low bandwidth internet uplinks. Same idea, but local serving of LLM inference.
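A minimal sketch of that "serve LLM inference on the local network" idea, assuming Ollama is installed on the server and the server's LAN address is 192.168.1.10 (both hypothetical; the model name is just an example):

```shell
# Bind Ollama to all interfaces instead of localhost so LAN clients can reach it.
OLLAMA_HOST=0.0.0.0:11434 ollama serve &

# Pull a model once on the server; clients never need the weights locally.
ollama pull llama3.2

# Any machine on the network can now hit the server's HTTP API.
curl http://192.168.1.10:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'
```

Same pattern as the Wikipedia mirrors: one box holds the data (here, the model weights) and everyone else gets cheap local access over the LAN instead of a metered uplink.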
You're probably going to end up switching around for daily limits.
Just note that no one is paying the real cost of AI; if they were, it would be plain to see that hiring a human is way cheaper. So burn that M$FT and VC money while you can.
I do not have VC money, just the upkeep from my parents.
Sorry, my point is that the free limits on these services are being paid for by VC and M$FT investments. It's free now because they are trying to capture the market and build better services while it is still early in the AI lifecycle.