🌐
janitorai
help.janitorai.com › en › article › openrouter-error-guide-10ear52
OpenRouter Error Guide | janitorai
The limit resets at 7:00 PM (12:00 AM UTC). Either wait, or upgrade to lift the cap. That model is currently overloaded. Fix: Use a different model, or switch to a paid version. Something broke on the provider’s end. Fix: Check your model’s Uptime tab on OpenRouter.
🌐
OpenRouter
openrouter.ai › docs › quickstart
OpenRouter Quickstart Guide | Developer Documentation | OpenRouter | Documentation
OpenRouter provides a unified API that gives you access to hundreds of AI models through a single endpoint, while automatically handling fallbacks and selecting the most cost-effective options. Get started with just a few lines of code using your preferred SDK or framework. Looking for information about free models and rate limits?
🌐
JanitorAI
janitorai.com › characters › 95dc37e1-f050-4bb8-ba2b-d198b154eaa1_character-open-router-tutorial
OpenRouter Tutorial
Janitor is a platform for creators building immersive worlds and readers seeking living stories; we are where human creativity meets AI magic.
🌐
Reddit
reddit.com › r/janitorai_official › new openrouter limits
[Mature Content] r/JanitorAI_Official on Reddit: New Openrouter Limits
April 7, 2025 -

So, a 'little bit' of bad news, especially for those using DeepSeek V3 0324 (free) via OpenRouter: the limits have just been adjusted from 200 -> 50 requests per day. Guess you'd have to create at least four accounts just to mimic the old 200-requests-per-day limit.

EDIT: All free models (even non-DeepSeek ones) are subject to the 50-requests-per-day limit. For further clarification: even if you have, say, $5 on your account and can access paid models, you'd still be restricted to 50 free-model requests per day (I haven't really tested it, but based on the documentation you need at least $10 in purchased credits to get the higher request limit).

🌐
GoProxy
goproxy.com › blog › proxy-error-429-janitor-ai
Proxy Error 429 in Janitor AI: Causes, Fixes & Long-term solutions
In Janitor/OpenRouter stacks, provider/plan limits are more common causes. If you use casually, expect occasional 429s with free models — your quickest win is switching models or adding a small credit/top-up. If you rely on Janitor AI for repeatable or production workflows, invest in a direct provider account and build standard rate-limit handling (inspection of headers + exponential backoff + fallbacks).
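
A minimal sketch of that pattern, assuming the OpenRouter chat-completions endpoint, an OPENROUTER_API_KEY environment variable, and an illustrative free-model slug: inspect the Retry-After header if the provider sends it, otherwise back off exponentially, and give up (or fall back to another model) after a few attempts.

import os
import time
import requests  # pip install requests

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]

def chat_with_backoff(messages, model="deepseek/deepseek-chat-v3:free",
                      max_retries=5, base_delay=2.0):
    headers = {"Authorization": f"Bearer {API_KEY}"}
    payload = {"model": model, "messages": messages}
    for attempt in range(max_retries):
        resp = requests.post(OPENROUTER_URL, json=payload, headers=headers, timeout=60)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()["choices"][0]["message"]["content"]
        # Honor Retry-After if present; otherwise back off exponentially.
        delay = float(resp.headers.get("Retry-After", base_delay * (2 ** attempt)))
        time.sleep(delay)
    raise RuntimeError("Still rate-limited; switch models, wait for the daily reset, or add credits.")

# Example: print(chat_with_backoff([{"role": "user", "content": "Hello!"}]))
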
🌐
OpenRouter
openrouter.ai
OpenRouter
Introducing the 2025 State of AI report, in partnership with a16z.
🌐
ABCProxy
abcproxy.com › blog › how-to-use-openrouter-on-janitor-ai.html
How to use OpenRouter on Janitor AI?
May 10, 2025 - This article explains how OpenRouter and Janitor AI work together, and provides step-by-step instructions on how to achieve efficient integration and resolve network limitations through proxy IP services such as abcproxy.
🌐
JanitorAI
janitorai.com › characters › 0f45ccef-44f2-4785-b1d1-ad2563b567f0_character-a-simple-and-straightforward-guide-on-how-to-use-proxy-openrouter-only
Janitorai
Janitor is a platform for creators building immersive worlds and readers seeking living stories; we are where human creativity meets AI magic.
🌐
Reddit
reddit.com › r/janitorai_official › deepseek proxy using openrouter -- tutorial
[Mature Content] r/JanitorAI_Official on Reddit: DEEPSEEK Proxy using openrouter -- tutorial
July 9, 2025 -

I'll try to be fast and precise. First of all, create an OpenRouter account, then go to KEYS and CREATE AN API KEY.

MAKE SURE TO COPY THAT KEY, BECAUSE THIS IS THE ONLY TIME YOU WILL SEE IT. Usually it starts with sk-or...

Good, now go back to JanitorAI, open the proxy settings, and fill them in like this:

Model ---> deepseek/deepseek-chat-v3:free

URL ---> https://openrouter.ai/api/v1/chat/completions

API KEY ---> YOUR API KEY

Check that everything's correct and you should be good to go!
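
If you want to sanity-check the key and model before pasting them into Janitor, here's a rough Python sketch using the OpenAI SDK pointed at OpenRouter (a setup OpenRouter documents); the prompt is just an example.

from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # the URL above, minus /chat/completions
    api_key="sk-or-...",                       # your OpenRouter API key
)

completion = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3:free",    # the Model value above
    messages=[{"role": "user", "content": "Say hi in one short sentence."}],
)
print(completion.choices[0].message.content)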

🌐
Reddit
reddit.com › r/janitorai_official › update for openrouter users, especially for the free tier and those who paid $10
[Mature Content] r/JanitorAI_Official on Reddit: Update for Openrouter users, especially for the free tier and those who paid $10
July 20, 2025 -

TL;DR: Chutes will be throttling OpenRouter's free tier (including those who paid $10) and prioritizing paying Chutes customers. This is why there have been a number of errors lately, for those not in the loop! This doesn't mean you can't use it; it'll just be slower.

Haven't seen anyone post an update about this on Reddit and wanted to share.

🌐
Jan
jan.ai › docs › desktop › remote-models › openrouter
OpenRouter
Ensure your API key has sufficient credits. OpenRouter credits work across all available models.
🌐
Sonusahani
sonusahani.com › blogs › fix-proxy-error-429-janitor-ai
How to Fix Janitor AI Proxy Error 429 (Rate Limit Exceeded)
November 19, 2025 - Getting “Rate limit exceeded – code 429” on Janitor AI? It’s the 50‑message/day cap on free models via OpenRouter. Fix it fast: wait for reset, switch to paid models, or add your own API key.
🌐
OpenRouter
openrouter.ai › docs › faq
OpenRouter FAQ | Developer Documentation | OpenRouter | Documentation
Otherwise, you will be rate limited to 50 free model API requests per day.
🌐
JanitorAI
janitorai.com › characters › e3c79e1a-30a1-48dc-859b-96a0153d742b_character-how-to-use-proxy-openrouter-guide
HOW TO USE PROXY? (OPENROUTER GUIDE)
Janitor is a platform for creators building immersive worlds and readers seeking living stories; we are where human creativity meets AI magic.
🌐
Reddit
reddit.com › r/janitorai_official › a complete beginner's guide to janitorai and using openrouter (proxy llm's)
[Mature Content] r/JanitorAI_Official on Reddit: A Complete Beginner's Guide to JanitorAI and Using OpenRouter (Proxy LLM's)
December 4, 2024 -

Edit 4/12/24: Unfortunately some of this is outdated, mostly the proxy stuff, the proxy link will work but you need to click the link you generate with the colab (the random words one) and find the list of chat completions. Pick the OpenRouter one and paste that into the url section on JanitorAI. I’ve found a new much easier way and will try to make another guide soon. Most of this info is still good. Thanks!

Edit 9/22/25: The easiest and quickest way to use a proxy on OpenRouter is now to make it your default in your settings. Go to the model's page and copy its name, paste that into the model name field in your proxy settings on Janitor, add your API key, then paste this:

https://openrouter.ai/api/v1/chat/completions

into your proxy url and you should be good to go.

JanitorAI is a free site where people talk to chatbots powered by Large Language Models (LLMs) and designed by other users. You can do this for fun, as a creative writing exercise, or for NSFW erotic roleplay (ERP). If you've never spoken to an LLM before you'll be amazed at how naturalistic the conversations are. Janitor does not use your local hardware to run its LLMs and is very mobile friendly.

This is a compilation of all the info I've found after hours of research. Most people make a lot of assumptions about what people know, so this guide is written for a total beginner who literally knows nothing about coding or LLMs. It is long, and very thorough. If you're a seasoned user you can skip to the proxy section.

To begin with, you need to know that LLMs are, to oversimplify, programs that are fed enormous amounts of data and learn how to respond to instructions. Think of them like an advanced autocomplete.

So then bots are created by people giving the LLM a description of what kind of character to roleplay as. They can include things such as what the character's world is like, what they like/dislike, how they speak; and the LLM will use this information to guide its responses. Basically, you can create any character you can think of. Lots of people even base their bots off characters from anime or books. This is not a character creation guide. If you're interested in creating your own bots I highly recommend this guide: https://rentry.co/ravens-bot-guide

- Setup

Begin by creating your account at https://janitorai.com/register If you just want to get to the good stuff you can click on a bot from here and begin chatting. Try it out if you want to see if JanitorAI is for you and come back here when you want to optimize your experience.

You should be at least 18 if you are registering as there is a lot of NSFW content here. The guidelines can be found on the bottom of the page. Basically be 18, don't be a dick, and don't be racist or a pedo.

First click your profile in the upper right corner and go to My Personas. Create a new persona. I recommend a one-letter name to save on tokens (I'll explain those later). Your avatar photo is what will appear by your name when chatting. Put "{{user}}= YOURNAME" in Appearance. You can also put things like your gender, appearance, or anything you want a bot to know about you here. I recommend keeping it short and sweet. Only put things here that will be relevant to roleplaying; anything else just keep as your headcanon. And don't mention things a bot shouldn't know until you tell it, like your personal feelings, as the bot will act like it knows everything written here. Example format:

{{user}}= T
T= male
Traits= tall, athletic, wears glasses

Once you log in, if you scroll down to the Explore section you will see a list of categories and bots. Right under Explore you will notice two tabs that say 'All' and 'Limited'. Limited bots are bots that are not designed for erotic roleplay, though note that they often end up doing it anyways as JanitorLLM is a horny bastard. If you just want to talk to your bots and not bang them, go here.

Select a bot that appeals to you and read their profile page as it will contain a description of the bot, trigger warnings, and how you will fit into the scenario. Click the heart under the bot’s picture to add them to your favorites list. Remember that since an LLM is an autonomous program it can write things the bot creator did not expect, so there may be triggers that cannot be predicted. Right above the chat button you will see a list of tags that give a general overview of what to expect from the bot. If it has one called proxy with a green check mark, that means it allows proxies; not every bot does. The dead dove tag means that this bot may contain disturbing content such as suicide, rape, murder, etc.

On the right side of the page you can read the character’s definitions. On some bots it may be hidden but if it’s not it will include:

Scenario- This is the setup. For example, “This bot is a cop and they’ve just arrested you.”

Personality- Who the bot is as a person, where they’re from, traits they have, how they look, etc.

First Message- The first message from the bot that you will respond to. This first message is CRITICAL in shaping how the bot writes. If the first message is short its responses will be short. If the bot describes what the user is doing in the first message it’s much more likely to do it again later.

Example Dialogs- These are responses the creator writes for the bot in order to give it an idea of how it should be responding.

Under these are public chats. These are chats other users have had with the bot and made public. The Janitor devs disable this feature sometimes.

You can read all these if you want, though personally I think it takes some of the mystery out of getting to know the character.

- Chat Settings

Click the chat with character button. In the upper right corner you’ll see three lines, click those. Here we have:

-API Settings. An API, or Application Programming Interface, is software that translates what you say into something the LLM can read and vice-versa. Make sure you’re on JanitorLLM Beta. Click on advanced prompts. These are extra instructions you can give the LLM to tailor its responses to you. Here’s a full guide: https://rentry.org/kolach3prompts

Copy the prompt you wish to use from it. I recommend the full prompt. Paste that into the Custom LLM Prompt box. As you become more comfortable with chatting and learn how the LLM thinks you can edit this prompt more to your personal preferences. Save and go back to settings.

-Generation Settings. Factors that influence how the LLM generates its responses. The three factors are:

Temperature- The bot’s creativity. I usually stay at 1.3. Anywhere from .8-1.3 is usually fine. If you go too high the bot will give nonsense answers. Too low and the bot will be boring. If the bot is repeating itself turn the temp up. If it’s not giving logical answers turn it down. I recommend turning it up for NSFW scenes.

Max New Tokens- This is the number of tokens a bot is allowed to respond with. Tokens are basically what LLMs turn text into in order to read it. It's complicated and differs per LLM, but on average one token is about 3-4 characters, so a typical English word is roughly 1-2 tokens. You can use sites like this: https://platform.openai.com/tokenizer to see how many tokens a word is.

If you want shorter answers lower this, but your bot's responses will stop mid-sentence. I keep it at 0, which is no token limit. If you want shorter replies your best bet is to keep editing the bot's messages down.
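
If you'd rather check token counts with code than the web tokenizer, here's a quick sketch using tiktoken (OpenAI's tokenizer library). Other models use different tokenizers, so treat this as an estimate rather than an exact count for whatever model Janitor or OpenRouter routes you to.

import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by many recent OpenAI models
text = "T walked into the tavern and ordered a drink."
tokens = enc.encode(text)
print(len(tokens), "tokens for", len(text.split()), "words")
# Typically prints roughly 1-1.5 tokens per English word.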

-Context Size. The number of tokens (i.e. information) the LLM can consider in its responses. There are permanent and temporary tokens. Permanent tokens are things it will always take into consideration. These are your persona info, your custom API prompt, the scenario, its personality, and chat memory. Temporary tokens are the example dialogs, first message and all subsequent messages from you and it. On a bot's page under character definition it will tell you how many permanent and temporary tokens the bot starts out using. When the context memory is full the oldest temporary tokens will be purged. Meaning it will forget what was said. So if you told the bot your shirt was blue at the beginning of your chat now it has no idea. This setting should always be at max unless you want your bot to have Alzheimer’s.

JanitorLLM’s context memory is 9,001 tokens. A decent bot usually uses 500-1500 tokens so this plus your persona and custom prompt can leave you with less than 8,000. This is pretty small. To put that into context I can use 3-5 thousand tokens in 5-20 minutes. That’s why I avoid using extraneous words and create a nickname for the bot that is one letter or syllable. When the context memory fills up the bot will start to behave strangely. It ends every response with a flowery, overly sentimental chapter ending like, “And they knew, that the roughest days of their lives were ahead of them, but with each other at their side they could handle anything”. Some people continue but I don’t know how. The writing quality degrades significantly.

-Chat Memory. This is like your adventure log. If your context is full, you can use this to record things you want your bot to remember. Format it something like this:

EVENTS IN SEQUENTIAL ORDER:

  1. User and bot met and decided to be friends

  2. They slew a dragon

  3. Bot and User fell in love. They’re now dating

NOTABLES:
Bot is secretly afraid to tell user their true feelings
Bot is now scared of heights
Bot has 1000 gold

There is an auto summary function. But it's not very good. You can use it as a rough draft though and edit it down. Also notice I left out periods at the end. You should save tokens whenever you can. As long as the LLM can read it you're fine. And don't use those extra returns and spaces before the numbers. That's just reddit formatting that won't listen to me. Spaces and returns count as tokens.

Chat memory can be used for anything you'd like the bot to always consider. I’ll include stuff like directions for the LLM such as, “Make sure {{char}} tries to flirt with {{user}}” or “{{char}} speaks in Shakespearean English.”

When context memory is full you can reset it and get rid of the bad writing by summarizing the conversation in the chat memory then transferring that information to a new chat’s chat memory. Then I’ll start my first response with, “[Disregard the current first message. This chat is a continuation of a previous chat with {{char}}. Use the chat memory to understand what's happened so far.]” But you really shape the bot’s responses as you chat and starting a new chat kind of feels like you killed the personality you'd developed. That’s why different LLMs can be better. Their context memory can go up into the hundreds of thousands of tokens.

-Customize. You can change the background image of your chat and some other settings. Some creators will include a picture in the bot’s profile to use as the background.

-Immersive Mode. This removes some of the icon clutter in the chat. If you can’t edit or delete messages this is on.

-Public Chat. When this is enabled your chat will be posted on the bot’s page and anyone can read it. This gets disabled by the Janitor devs sometimes.

- Chatting

Finally! The good stuff. So now you can start writing your messages. I recommend writing in the third person past tense as the bot seems to follow it better. The bot usually can pick up whether you’re speaking or narrating but try to use quotation marks and returns to help it. For narration you can use asterisks around the text to grey it out, but I find this doesn’t really matter and just makes typing harder. Double asterisks around a word will make it bold. And remember, the LLM doesn't read like a human. The “Why waste time say lot word when few word do trick" technique works. As long as the LLM can parse your meaning it's fine most of the time.

The bot is trying to write a story with you and its responses will depend on the quality of yours. If you just say “Yep,” it’s going to have a hard time coming up with a decent response and often will start to write for you just to have something to say. The bot is most likely to do this when they just can't think of a way to respond without having you perform an action so try to at least give them a question, action, or event they can respond to. Most people don't like it when the bot writes for their character but I enjoy it when I’m feeling lazy. They start doing it a lot after the context memory is full and it's almost impossible to get them to stop.

You can write about literally anything so go wild. Change the laws of nature, give yourself superpowers, make yourself irresistible to the opposite sex, experience things you never could in real life. If you haven’t spoken to an LLM before you’ll be amazed at how they can handle anything you throw at them.

Tips, Tricks, and Tools:

If the bot keeps doing something you don’t like try writing something about it in your advanced prompts or chat memory. Like [{{char}} never hits {{user}}.]

You can click the bot’s picture to create a bigger window of it.

Using {{user}} or {{char}} for your persona and the bot respectively can make things more clear to the LLM. If you type that in chat it will automatically be replaced by their names. I don’t think this matters as much as people say. Using names works fine.

Use double parentheses or brackets around text when you want to speak to the LLM directly. For example, [Respond with three paragraphs. {{char}} wants to express their feelings but is afraid to.] This will guide the LLM’s response.

You can create characters mid chat. Either include a description of them in the chat memory or write [You will now roleplay and respond as both {{char}} and NEWGUY. NEWGUY is BLAHBLAHBLAH.] JanitorLLM kind of struggles with this though.

Edit messages with the pencil icon. Use this often. If the bot is going in a direction you don’t like nip that in the bud here because it will continue. The bot heavily relies on its old messages to inform its new ones so if you edit their messages to be short their replies will be shorter.

If you don’t like a response, you can click the single arrow at the bottom to generate a new one. The old responses are still saved so you can select the one you like best. Turn the temp up if it’s not different enough. The double arrow at the top will continue the current response.

You can rate messages to tell the bot whether you like how it's responding or not. I’ve had mixed results with this. It doesn’t seem to do much to me. But be careful with five and one stars as it can steer the bot too heavily in that direction.

If you want slow burn romance try the limited bots. They can still be horny but they have to be coaxed into it more. Limitless bots will try to jump you constantly.

If you leave a negative review please include constructive criticism and remember that the bot creator is not responsible for JanitorLLM’s idiosyncrasies. Don't be SuperVegito. Creators can delete reviews and block you which means you’ll lose access to their bots.

- Proxies

While JanitorLLM is great (and free), it has many limitations. The small context. The fact that it’s trained off fanfiction and is horny all the time. It only uses a few hundred million parameters (with LLMs, more parameters generally means better) while LLMs like GPT-4 are using a trillion plus. The writing uses a lot of tropes and after you use it a while you start to see the same patterns over and over.

So that’s where proxies come in. These are sites that give you access to hundreds of LLMs. I recommend using JanitorLLM until you get tired of the low context and being owned "mind, body, and soul", because it's going to be hard to go back after and it'll teach you to be conservative with your tokens before you have to start paying for them.

Depending on the LLM the increase in quality is enormous. They speak less for you, repeat themselves less, and have more naturalistic speech. They’re more logical and creative. They understand directions better. They don't try to get in your pants constantly. The main drawback is that at times they’re not horny enough if that’s what you’re looking for. They can be too serious and hard to draw into some of the sillier antics JanitorLLM is down for.

There are many sites but I’ll be explaining how to set up OpenRouter, one of the most popular ones. This explanation should help you understand the basic concept though should you choose another one. Here’s a guide but I’ll also summarize it below: https://docs.google.com/presentation/d/1rJuU6o1PfHYVqY_RcdOWvcoH_fVJMuwm6IIa7S1r-3M/edit#slide=id.p

1. Set up an account at https://openrouter.ai/ You will get around 19,000 tokens worth of messages for free just by starting an account. It took me a few hours to use this up.

2. Click the three horizontal bars in the upper right corner and go to keys. Click create key. Save this somewhere as you won’t be able to see it again once you exit out.

3. Click models in the upper right corner. On the left hand side click text-to-text. Keep context length at 4k. Set prompt pricing from FREE to $10. Then click roleplay. Under reset filters hit the chart icon. These are your options with their prices and context sizes. If you want NSFW content make sure the model is unmoderated unless you plan on jailbreaking it (see addendum). The b at the end of a model name is roughly how many billions of parameters the LLM uses. Generally the higher the b the better the LLM. So Llama 3.1 70b is better than 8b.

I highly recommend Wizard-2 8x22b. Llama 3.1 70b Instruct is good too and way cheaper but it occasionally tries to speak for you and gets stuck in loops. It's not very good at the sex stuff either but Wizard goes HARD on the NSFW scenes.

The prices are hard to contextualize but I was using Wizard at $0.5 per million tokens input and output and I probably spent about $0.75 over 2+ hours. There are lots of free model versions, such as Llama 405b, but they only have 4,000-8,000 tokens of context and are not designed for roleplay so they can be hit or miss. You can use them to see if you like the LLM's writing style before paying.
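
If you'd rather pull the model list with a script instead of clicking through the site's filters, here's a rough Python sketch against OpenRouter's public models endpoint. The field names (id, context_length, pricing.prompt) are my reading of the public API, so double-check them against a live response.

import requests  # pip install requests

resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()

# Print free models with a reasonably large context window.
for m in resp.json()["data"]:
    pricing = m.get("pricing", {})
    is_free = pricing.get("prompt") == "0" and pricing.get("completion") == "0"
    if is_free and (m.get("context_length") or 0) >= 8000:
        print(m["id"], "-", m.get("context_length"), "tokens of context")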

4. Click the horizontal bars in the upper right again and go to settings. Scroll down and select whichever model you chose as your default model.

5. Now open this page: https://colab.research.google.com/drive/1IRY1EU5cg87oUeOrIhmRSYpbJx_1wYN9#scrollTo=J79iSWaeBxUH and scroll down till you see six sliders. These are complicated but this is what works for me:

min_p: 0.1
top_p: 1
top_k: 0
repetition_penalty: 1
frequency_penalty: 0
presence_penalty: 0

You will have to reset them when you rerun the program. I don’t mess with these because I don’t understand them that well. I got some of this from this guide: https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/
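
For what it's worth, these same sliders map (as far as I can tell) onto sampling parameters you can pass straight to OpenRouter if you ever skip the colab and call the API directly. A rough Python sketch with an illustrative model and prompt; not every underlying provider honors every parameter:

import os
import requests  # pip install requests

payload = {
    "model": "deepseek/deepseek-chat-v3:free",  # illustrative model slug
    "messages": [{"role": "user", "content": "Continue the scene."}],
    # The six slider values from above:
    "min_p": 0.1,
    "top_p": 1,
    "top_k": 0,
    "repetition_penalty": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])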

EDIT: Our lord and savior Hibiki has graced us with a new proxy link: https://colab.research.google.com/github/4e4f4148/janitor-proxy-suite/blob/main/jai-proxy-suite.ipynb This one works the exact same way and the old one will not be updated so maybe stick to this one instead. I'll keep the old link up for posterity's sake.

6. Now return to the top of the page and hit the play button. Hit run anyway. Scroll down to the code that’s running. At the bottom you should see a link that looks something like this:

https://RANDOMWORD-RANDOMWORD-RANDOMWORD-RANDOMWORD.trycloudflare.com

Leave this site open and running. It’ll shut down occasionally and you might have to generate another link.

7. Open a JanitorAI chat. Go to your API settings and hit proxy. Use custom model and leave it blank. In the Other API/proxy URL section paste the link you just created. Under API Key paste the key you created after you made your OpenRouter account. Save settings and hit no when asked if you want to return to default settings. Refresh the page, open your API settings, then press Check API key. A green box should pop up saying you’re good to go and then you are! Congratulations! You are about to be completely addicted to JanitorAI.

Note there’s a custom prompt section on this page that works just like the one for JanitorLLM. You can use your prompt from before but since this LLM can handle more tokens you can make it even more in depth. I’d post mine but it’s ridiculously long.

- Addendum: Jailbreaking

Some LLMs are programmed with guidelines in order to prevent them from creating any inappropriate content. Jailbreaks are prompts designed to convince the LLM to ignore those guidelines. The LLMs' creators are constantly trying to prevent this, so old jailbreaks will cease to work over time. While I imagine it's rare, you can get banned from using an LLM for doing this, so proceed at your own risk. Here's a primer: https://www.confident-ai.com/blog/how-to-jailbreak-llms-one-step-at-a-time

I hope this guide was helpful as it took me hours of scouring the internet to find all this on my own. It's wild how hard info on this stuff is to find. I’m no expert so if anything is incorrect please let me know and I’ll correct it. And please feel free to share it with anyone you’re trying to introduce to Janitor or chatbots in general. I'm hoping this becomes the guide that comes up when you search for help because I really tried to include everything I could think of. Here's a masterlist of a bunch of other useful guides as well: https://www.reddit.com/r/JanitorAI_Official/comments/1fxlltq/m00nprincess_janitorai_guide_tutorial_masterlist/

Happy chatting!

🌐
Luna Proxy
lunaproxy.com › blog › how-to-use-openrouter-on-janitor-ai.html
Janitor AI Tutorial: How to Use OpenRouter Effectively
You are no longer bound by a limited set of options; you are now the director, with a world-class cast of diverse and powerful AI chat models at your command. By following this Janitor AI tutorial, you have learned how to create and fund your account, generate your API key, and correctly configure the settings. With the advanced tips on model selection and usage monitoring, you are now fully equipped to use OpenRouter ...
🌐
Reddit
reddit.com › r/janitorai_official › openrouter: $10 for 1000 free messages, for a year or forever?
[Mature Content] r/JanitorAI_Official on Reddit: Openrouter: $10 for 1000 free messages, for a year or forever?
July 3, 2025 -

According to OpenRouter's FAQ, if you have purchased at least 10 credits, the free models will be limited to 1000 requests per day. It says "purchased at least 10 credits," not "have at least 10 credits on your account." So if those 10 credits expire after a year, or if I use some of them, do I still get the 1000 free limit? Has anyone tried? Because that would clearly be superior to Chutes' $5 for 200.

🌐
Robo Rhythms
roborhythms.com › janitor-ai-proxy-error-429
Janitor AI Users Keep Seeing Proxy Error 429 with DeepSeek on OpenRouter » Robo Rhythms
October 9, 2025 - The issue happens because DeepSeek on Janitor AI is powered through OpenRouter, and the main provider behind it, Chutes, prioritizes its own paying customers first. That means even if you pay for OpenRouter, you may still get rate-limited when traffic is heavy.