not really, no
Yes, rate limits. An edit is also an API request.
Made an AI bot with real-time text generation through edit messages, and it wasn't hard to hit the rate limit, let's say. So that needs to be handled in the code.
How many requests can you send via the API?
check this: https://telegra.ph/So-your-bot-is-rate-limited-01-26
I heard the bot that wrote this article used GPTv1
no, worse :( it was a human :(
Wait, I didn't read anything regarding edit messages. How is that calculated? I'd also find that interesting. My current solution is load-balanced chunking of the message to edit: basically, don't edit for each word; instead, the words are chunked and the edit happens per chunk, with the chunk size calculated from the load. Is there a better solution for it, or even a way that I can edit for each individual word?
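The chunking approach described above can be sketched roughly like this. This is a minimal illustration, not a real Telegram integration: `edit_fn` is a hypothetical stand-in for a call like `bot.edit_message_text(...)`, and `chunk_size` / `min_interval` are illustrative knobs that a load balancer could tune at runtime, not Telegram-defined limits.

```python
import time

def stream_with_chunked_edits(tokens, edit_fn, chunk_size=8, min_interval=1.0):
    """Buffer generated tokens and edit the message once per chunk.

    Instead of one editMessageText call per generated word, tokens are
    buffered and an edit is issued only when at least chunk_size new tokens
    have accumulated AND min_interval seconds have passed since the last edit.
    """
    buffer = []
    sent = 0          # number of tokens already reflected in the message
    last_edit = 0.0
    for token in tokens:
        buffer.append(token)
        now = time.monotonic()
        if len(buffer) - sent >= chunk_size and now - last_edit >= min_interval:
            edit_fn(" ".join(buffer))
            sent = len(buffer)
            last_edit = now
    if sent < len(buffer):            # flush whatever is left at the end
        edit_fn(" ".join(buffer))
    return " ".join(buffer)
```

With 20 tokens and `chunk_size=5`, this issues 4 edits instead of 20, which is the whole point: the user still sees the answer grow in near real time, but the request rate drops by the chunk factor.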
an edit is the same as a send
Aaah okay thank you
So a custom load balancer is then the only solution for something like that, am I right?
do all the edits in one go if possible.
Then I would need to wait until the complete answer is generated, which would completely remove the real-time generation aspect, sadly.
Think about it this way: every "edit" is a full send. If you edit a message 100 times, then you are sending a message 100 times. This can negatively affect users with limited data. I get wanting real-time output, but this is a bad way to go about it. That being said... and I hate saying this... look into the newer Web Apps functionality.
Web Apps are already integrated. The problem is you can't make them the main solution; clients like Telegram X don't allow it.
And I don't think it really hurts users with a bad connection. Having part of the answer already, rather than needing to wait for the whole answer, is quicker in nearly every situation, even on a slow connection.
Also, if you want to show progress like 0% to 100%, then I think "edit message" would have to be unlimited, or have a much bigger limit than "send message", like 50 edits per second.