🌐
Reddit
reddit.com › r/openai › what are people using the openai apis for?
r/OpenAI on Reddit: What are people using the OpenAI APIs for?
September 17, 2023 -

It seems everyone is head down coding on open-ended tools / infra like langchain, vector DBs, or even ChatGPT itself.

I’m curious, how are businesses using LLMs? It seems for 99% of use cases building a smaller, focused model would be the best way to solve a real business problem.

Top answer
1 of 41
44
You're really not thinking with portals. The power of LLMs is not that they outperform highly specialized tools in very niche applications (although, with the correct system instructions, input variables, prompting techniques, and parameters, I'm convinced they CAN outperform MANY specialized tools). The power of LLMs like GPT-4 is that they can act as a "brain" in any repetitive business process that used to require a human, AND they will outperform 80% of the people doing this work. Think of ANY recurring business process, and you'll find tasks that can be almost completely automated by a simple Python workflow with a few API calls to OpenAI.

Marketing:

  • Competitor / market research: scraping, summarizing, analyzing, generating reports

  • Social media posts: idea generation, copy, scheduling

  • Community engagement: commenting / reposting / engaging on platforms

  • Content marketing: brainstorming / content strategy creation, content drafting, content editorial

  • Advertising: ad creation, optimization (e.g. loop in the Google Ads API and have different GPT-4 workflows doing everything from analyzing search queries to adding negative keywords)

  • Email marketing: lead scraping + filtering, domain warming, lead research / profile scraping, completely personalized outreach emails based on scraped social profiles

Sales:

  • Sales development activities: email, social, phone messages

  • CRM sanitation / upkeep: filtering / triaging leads, automatic transcription + summaries + to-dos based on conversations, automated follow-ups, reading CRM notes & following up with all inactive prospects with personalized messages

  • Sales strategy: analyzing notes / conversations and generating a personalized pitch, transcribing calls and analyzing buyer sentiment

Service / Support:

  • Chatting with an embedded knowledge base for customer service queries and technical support

  • Summarizing / analyzing agent conversations

  • Drafting messages / responses to customers

  • Automatically building / expanding the knowledge base based on user queries

Technical / Dev:

  • Data pre-processing

  • Unit testing

  • Chatting with your codebase

  • Translating user requirements into technical requirements / Jira tickets

  • Sprint planning / DOR & DOD drafting

  • Translating technical documentation into user language

  • Drafting technical documentation

This is all just the tip of the iceberg. Virtually any repetitive process that was difficult to automate because it required a competent brain as a checkpoint is now ripe for automation. Answer from Phantai on reddit.com
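Several of the items above (e.g. reading CRM notes and following up with inactive prospects) reduce to exactly the pattern the answer describes: a plain Python loop around one API call. A minimal sketch, assuming the official `openai` Python package; the prompt text, `crm_notes`, and `send_email` are illustrative placeholders, not part of the original post:

```python
def build_messages(note: str) -> list[dict]:
    """Turn one CRM note into a chat prompt for a follow-up draft."""
    return [
        {"role": "system",
         "content": "You draft short, personalized follow-up emails to inactive prospects."},
        {"role": "user",
         "content": f"CRM note:\n{note}\n\nDraft the follow-up email."},
    ]

# The workflow itself is just a loop (requires OPENAI_API_KEY to be set):
# from openai import OpenAI
# client = OpenAI()
# for note in crm_notes:  # crm_notes: your own CRM export
#     resp = client.chat.completions.create(model="gpt-4o",
#                                           messages=build_messages(note))
#     send_email(resp.choices[0].message.content)  # your delivery step
```

The human stays in the loop only as a reviewer, which is the "competent brain as a checkpoint" point the answer makes.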
2 of 41
22
"It seems for 99% of use cases building a small model would be the best way to solve a real business problem."

That assertion deserves some evidence or argumentation. I don't see why you think that's true.
🌐
Reddit
reddit.com › r/artificialinteligence › your experience with openai apis and how did you learn?
r/ArtificialInteligence on Reddit: Your experience with OpenAI APIs and how did you learn?
May 12, 2024 -

Hey everyone! Been diving deeper into AI stuff, and lately I'm interested in OpenAI's APIs. Anyone else trying them out? How do you fit them into your projects?

Came across this course on using OpenAI to make AI products. Not sure if I wanna jump in.

Anyone checked out something similar? Worth it for someone who’s kinda already got a grip on the tech side? Any free/paid alternatives you know about?

Would love to hear what you guys think

🌐
Reddit
reddit.com › r/openai › use. the. damn. api
r/OpenAI on Reddit: USE. THE. DAMN. API
January 25, 2024 -

I don't understand all these complaints about GPT-4 getting worse that turn out to be about ChatGPT. ChatGPT isn't GPT-4. I can't even comprehend how people are using the ChatGPT interface for productivity things and work. Are you all just, like, copy/pasting your stuff into the browser, back and forth? How does that even work?

Anyway, if you want any consistent behavior, use the damn API! The web interface is just a marketing tool; it is not the real product. Stop complaining it sucks, it is meant to. OpenAI was never expected to sustain the real GPT-4 performance for $20/mo, that's a fairy tale. If you're using it for work, just pay for the real product and use the static API models.

As a rule of thumb, pick gpt-4-1106-preview, which is fast, good, cheap, and has a 128K context. If you're rich and want slightly better IQ and instruction following, pick gpt-4-32k-0314. If you don't know how to use an API, just ask ChatGPT to teach you. That's all.
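"Use the static API models" in practice means hard-coding a snapshot id so behavior can't drift under you the way the web UI can. A minimal sketch, assuming the official `openai` Python package; `gpt-4-1106-preview` is the 128K-context snapshot referenced above:

```python
def pinned_request(prompt: str, model: str = "gpt-4-1106-preview") -> dict:
    """Request body for a Chat Completions call against a pinned snapshot."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

# Sending it (requires OPENAI_API_KEY to be set):
# from openai import OpenAI
# resp = OpenAI().chat.completions.create(**pinned_request("Summarize ..."))
# print(resp.choices[0].message.content)
```

Because the snapshot never changes underneath you, identical prompts and parameters give far more consistent behavior than the continuously updated ChatGPT interface.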

🌐
Reddit
reddit.com › r/learnmachinelearning › is everyone paying $ to openai for api access?
r/learnmachinelearning on Reddit: Is everyone paying $ to OpenAI for API access?
September 20, 2024 -

In online courses on building LLM/RAG apps using LlamaIndex and LangChain, instructors ask you to use OpenAI. But it seems, based on the error message that I get, that I need to enter my cc details and pay at least $5, if not more, to get credits. Hence, I wonder if everyone is paying OpenAI while taking these courses, or if there is an online course for building LLM/RAG apps using Ollama or other alternatives.

Thank you in advance for your input!
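One free route for course exercises: the `openai` client lets you override `base_url`, and Ollama serves an OpenAI-compatible endpoint locally, so most LlamaIndex/LangChain tutorials run unchanged against a local model. A sketch under those assumptions; the `api_key` value for Ollama is a required placeholder, not a real key:

```python
def client_config(local: bool) -> dict:
    """Keyword arguments for OpenAI(): the local branch targets Ollama's
    OpenAI-compatible server instead of api.openai.com."""
    if local:
        return {"base_url": "http://localhost:11434/v1", "api_key": "ollama"}
    return {}  # default endpoint; reads OPENAI_API_KEY from the environment

# from openai import OpenAI
# client = OpenAI(**client_config(local=True))
# client.chat.completions.create(
#     model="llama3",  # any model previously fetched with `ollama pull`
#     messages=[{"role": "user", "content": "What is RAG?"}],
# )
```

The same two keyword arguments work for any framework that wraps the OpenAI client, which is why course code usually needs no other changes.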

🌐
Reddit
reddit.com › r/chatgptpro › do i need, openai api or can i get away with something free?
r/ChatGPTPro on Reddit: Do I need the OpenAI API, or can I get away with something free?
May 26, 2024 -

I currently pay for pro because my needs are pretty simple. I just use it for help with work and some light coding projects I do from time to time.

I barely use it, but when I do I usually get close to hitting the rate limit in the web UI. That might have changed since they just adjusted the limit, I think, but the fact remains: I use it 2-3 times a week for a few questions, and then once every few months for a more complicated session. I was thinking I could save some money by just using the API.

I just set up a self-hosted web interface to use with the API and threw in $20, and I'm going to test it this month to see if I top out my usage. But it made me wonder whether I could use an open-source LLM. Not too sure how that works, or whether I can train it myself on the reference material I would like it to help with. Mainly coding and scripting is what I use it for.

How I work: for coding projects I have GPT write almost all of the code. I do iterations, see what works and what doesn't, then have it write the final application. I know how to code, but GPT is just so much faster, and it's stupid imo to not just have it write the whole thing. I inevitably have to make tweaks and corrections, but it works well enough for the type of projects I'm working on that are just for me. Not sure if any other language models can perform as well as GPT for coding. If there is one that is free, or at least cheaper than $20 a month for the amount of usage I need, then I'd love to hear about it.

🌐
Reddit
reddit.com › r/openai › open ai api costs me 1$?
r/OpenAI on Reddit: OpenAI API costs me $1?
October 6, 2024 -

I was looking to pay for the OpenAI API for my simple NLP classification problem.

Given the current price for GPT-4o of $2.50 per 1M input tokens, I have calculated that it would cost me less than $2 a month to use the API?

My output is a 3-class classification label, so the output cost is next to nothing.

I feel like something is off..

Does anybody have any real-life experience using their API?
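Nothing is off; the arithmetic checks out. A back-of-the-envelope sketch with illustrative assumptions not stated in the post (GPT-4o at $2.50 per 1M input and $10.00 per 1M output tokens, ~300 input tokens and a 1-token class label per request, 2,500 requests a month):

```python
IN_PRICE = 2.50 / 1_000_000    # dollars per input token (assumed rate)
OUT_PRICE = 10.00 / 1_000_000  # dollars per output token (assumed rate)

def monthly_cost(requests: int, in_tok: int, out_tok: int) -> float:
    """Estimated monthly spend in dollars for a fixed-shape workload."""
    return requests * (in_tok * IN_PRICE + out_tok * OUT_PRICE)

print(monthly_cost(2_500, 300, 1))  # ≈ 1.9 dollars a month
```

At these volumes a short-output classification task really is that cheap; the bill scales linearly with request count and prompt length, so the number is easy to re-check against current pricing.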

🌐
Reddit
reddit.com › r/localllama › thoughts on openai's new responses api
r/LocalLLaMA on Reddit: Thoughts on openai's new Responses API
March 18, 2025 -

I've been thinking about OpenAI's new Responses API, and I can't help but feel that it marks a significant shift in their approach, potentially moving toward a more closed, vendor-specific ecosystem.

References:

https://platform.openai.com/docs/api-reference/responses

https://platform.openai.com/docs/guides/responses-vs-chat-completions

Context:

Until now, the Completions API was essentially a standard—stateless, straightforward, and easily replicated by local LLMs through inference engines like llama.cpp, ollama, or vLLM. While OpenAI has gradually added features like structured outputs and tools, these were still possible to emulate without major friction.

The Responses API, however, feels different. It introduces statefulness and broader functionalities that include conversation management, vector store handling, file search, and even web search. In essence, it's not just an LLM endpoint anymore—it's an integrated, end-to-end solution for building AI-powered systems.
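The stateless/stateful split in concrete terms: with Chat Completions the client owns and resends the whole transcript each turn (which is why llama.cpp, Ollama, or vLLM can stand in), while with Responses the server keeps the thread and the client passes only an id. A sketch, assuming the official `openai` package and the `previous_response_id` parameter from the Responses docs linked above:

```python
def next_turn(history: list[dict], user_msg: str) -> list[dict]:
    """Stateless style: the caller owns the transcript and resends all of it."""
    return history + [{"role": "user", "content": user_msg}]

# from openai import OpenAI
# client = OpenAI()
#
# Stateless, replicable by local engines:
# r = client.chat.completions.create(model="gpt-4o",
#                                    messages=next_turn(history, "And again?"))
#
# Stateful, conversation state lives server-side:
# r1 = client.responses.create(model="gpt-4o", input="Hi")
# r2 = client.responses.create(model="gpt-4o", input="And again?",
#                              previous_response_id=r1.id)
```

Replicating the second style locally requires persistent server-side storage of conversations, which is exactly the lock-in concern raised below.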

Why I find this concerning:

  1. Statefulness and Lock-In: Inference engines like vLLM are optimized for stateless inference. They are not tied to databases or persistent storage, making it difficult to replicate a stateful approach like the Responses API.

  2. Beyond Just Inference: The integration of vector stores and external search capabilities means OpenAI's API is no longer a simple, isolated component. It becomes a broader AI platform, potentially discouraging open, interchangeable AI solutions.

  3. Breaking the "Standard": Many open-source tools and libraries have built around the OpenAI API as a standard. If OpenAI starts deprecating the Completions API or nudging developers toward Responses, it could disrupt a lot of the existing ecosystem.

I understand that from a developer's perspective, the new API might simplify certain use cases, especially for those already building around OpenAI's ecosystem. But I also fear it might create a kind of "walled garden" that other LLM providers and open-source projects struggle to compete with.

I'd love to hear your thoughts. Do you see this as a genuine risk to the open LLM ecosystem, or am I being too pessimistic?

🌐
Reddit
reddit.com › r/openai › newbie here: how does the api work?
r/OpenAI on Reddit: Newbie here: How does the API work?
October 17, 2023 - If you didn't set limits... Your API key is just like a password. The API will likely cost more than a $20 subscription if you use it enough. ... OpenAI is an AI research and deployment company.
🌐
Reddit
reddit.com › r/openai › controversial opinion on openai api
r/OpenAI on Reddit: Controversial Opinion on OpenAI API
March 14, 2025 -

So I've been working with OpenAI's API for a while now, and has anyone else noticed how it's basically turning into Langchain at this point? 😂

What started as a simple text-in, text-out API has exploded into this massive ecosystem:

  • Embeddings

  • Vector stores

  • Real-time streaming

  • Assistants API

  • Agents API

  • Web sockets

  • Function calling

  • Vision APIs

Don't get me wrong, all these features are powerful, but it's getting to the point where you need a PhD just to figure out which part of their API you should be using for your project.

Remember when we used to just import openai, set an API key, and call completion? Those were simpler times...

Now I'm wondering if I should even bother with third-party libraries anymore since OpenAI is just absorbing all their functionality anyway. Half expecting them to announce "OpenAI Chains" next week lol.

Anyone else feeling the feature bloat, or am I just being an old man yelling at clouds? What's your experience been like with all these new APIs?

Top answer
1 of 5
3
"you need a PhD to figure out which part of their API you should be using for your project"

It's not nearly that bad. I just wrote my first project using their API a week ago (a multi-LLM query tool). The OpenAI part took me about 15 minutes, of which the first 10 minutes involved setting up an OpenAI account and buying credits. How did I do it? I searched for "openai api getting started" and followed the most basic example. Done. I even meter my usage by keeping track of input and output token counts and per-token rates.

Sure, the API syntax is a little more convoluted than it needs to be. That's because it contains features I don't need for my project, so I'm not messing with them, and it's nice to know that they're there if I want them. The other LLM-based API I've used recently (Ollama) has very similar issues.

Sure, the documentation could be clearer or more consistent. As documentation for dynamic, in-process APIs goes, it's pretty normal and usable. Again, see Ollama.

Sure, the API has a lot of discrete parts for different uses, and its sprawling nature adds complexity. Show me a sophisticated, multi-purpose API that doesn't do that: numpy, tensorflow, opencv, gcloud all have this same problem, some much worse than openai. Sometimes a project just can't be simplified beyond a certain point. It's fine.
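The metering the commenter mentions is straightforward because every Chat Completions response carries a `usage` object with token counts. A sketch; the per-token rates below are placeholders you would fill in from the current pricing page, not authoritative figures:

```python
# Dollars per token; placeholder rates, not authoritative pricing.
RATES = {"gpt-4o": {"in": 2.50 / 1_000_000, "out": 10.00 / 1_000_000}}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of one call, computed from the token counts the API reports back."""
    r = RATES[model]
    return prompt_tokens * r["in"] + completion_tokens * r["out"]

# After each call, accumulate from the response's usage object:
# resp = client.chat.completions.create(...)
# total += call_cost("gpt-4o", resp.usage.prompt_tokens,
#                    resp.usage.completion_tokens)
```

Summing this per call gives a running meter without any extra infrastructure, since the counts come back on every response.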
2 of 5
3
I was pretty mad after reading that they plan to replace the Assistants API with the "Responses" API. As a layman, it took me forever to wrap my head around Assistants. My very first project was a personal-use AI app powered by the Assistants API; it took months of trial and error, and effort to decipher their docs' convoluted language, to get it to work. And now, another change. I wouldn't mind if it were for the best and the language weren't misleading, but I still don't like the idea of leaving the browsing tool in OA's hands; I've seen enough arbitrary shady stuff in the iOS app to know better.
🌐
Reddit
reddit.com › r/mlquestions › trying to learn how to use openai api for a personal project -- advice appreciated
r/MLQuestions on Reddit: Trying to learn how to use OpenAI API for a personal project -- advice appreciated
December 14, 2023 -

Hey everyone,

I'm a college student working on a personal project and I think I might be able to use the OpenAI API to help me but I'm not sure if it's the best tool or how to use it properly.

I've written a script that scrapes my college's daily dining website and collects a list of all the dishes being served that day. The idea is that the user could enter some input like "protein, Asian" and the code could find the foods that best match the user's query, in this case, "egg fried rice" and "kung pao chicken." Or "vegetarian, healthy, breakfast" would output "overnight oats with blueberries."

OpenAI's API requires you to pay for it, but I don't really understand the pricing structure. If the cost is negligible then I don't mind paying it.

Any help is appreciated, thanks in advance!
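One common way to build the dish matcher: embed every dish name once, embed the user's query, and rank dishes by cosine similarity. The ranking below is plain math; the embedding call is sketched in comments and assumes the `openai` package (`text-embedding-3-small` is one current low-cost option, and `todays_dishes` is the poster's scraped list). Since embedding inputs are billed per token and dish names are only a few tokens each, the cost for a menu-sized list is effectively negligible.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank(query_vec: list[float], dish_vecs: dict) -> list[str]:
    """Dish names ordered best-match-first against the query vector."""
    return sorted(dish_vecs, key=lambda d: cosine(query_vec, dish_vecs[d]),
                  reverse=True)

# from openai import OpenAI
# client = OpenAI()
# def embed(text: str) -> list[float]:
#     return client.embeddings.create(model="text-embedding-3-small",
#                                     input=text).data[0].embedding
# dish_vecs = {dish: embed(dish) for dish in todays_dishes}
# print(rank(embed("protein, Asian"), dish_vecs)[:3])
```

Embeddings handle the fuzzy matching ("protein, Asian" vs. "kung pao chicken") without needing a chat model at all, which keeps the cost well under a cent a day.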

🌐
Reddit
reddit.com › r/openai › where is the best place to learn how to use the api and assistants?
r/OpenAI on Reddit: Where is the best place to learn how to use the API and assistants?
January 11, 2024 -

I have begun trying to learn how to use the OpenAI API and also came across the Assistants, which seem very interesting. I'm having trouble learning how to use these tools from OpenAI's guides alone because I don't have experience with them; I have only completed a few computer science courses in college. For reference, I am currently following the quickstart tutorial: I've set up my OpenAI environment with the key and have the Python file to test, but it is not running. I believe this is because I have to either buy credits or use the Playground, but I'm not sure how to use the Playground.

Is there any good website/video or anything that can help learn how to use this correctly?

Top answer
1 of 3
1
How do I connect terminal to the playground?
2 of 3
1
To learn how to use the OpenAI API and assistants effectively, you might find the following resources helpful:

  • DataCamp's Beginner's Guide: offers a hands-on tutorial and best practices for using the OpenAI API. It covers industry use cases like chatbots, sentiment analysis, image recognition, and gaming, and provides a step-by-step guide to making your first API call, exploring different engines, and experimenting with prompts.

  • Pluralsight's Guide on OpenAI Assistants: focuses on creating and using OpenAI assistants. It explains how to create an assistant using the Playground UI and Python APIs, covering key concepts like Assistant, Thread, Message, Run, and Run Step. The guide is practical, providing examples like building a travel assistant and integrating various tools like Functions, Code Interpreter, and Retrieval.

  • OpenAI Cookbook: offers an overview of the Assistants API, including examples and explanations of key concepts and operations. It discusses how to submit messages, create threads, and run assistants, providing code samples and explaining how API calls work asynchronously.

These resources are tailored for beginners and provide practical, step-by-step instructions and examples to help you understand and utilize the OpenAI API and assistants beyond the basic guides provided by OpenAI. Source: ChatGPT-4
🌐
Reddit
reddit.com › r/openai › how to create an app that uses the api and release it to the public?
r/OpenAI on Reddit: How to create an app that uses the API and release it to the public?
August 15, 2024 -

I scoured around but couldn't find a way to have users use their own key (without entering it manually, of course). I know using my own key might get heavy on the bank. Here's what I've considered so far:

  • "Login with OpenAI" authentication doesn't exist (similar to the "Login with Google/Facebook" auth), which could otherwise supply an API key for each user.

  • Even though I can include a GUID, the API key being used is the same, and there's no way to rate-limit per user. Ideally I'm okay with limiting users to $5 worth of spend as a CAC.

  • Afaik the Azure OpenAI API doesn't provide a per-user rate limit either; its rate limits apply to total usage of the API.

My options now are to host my own server and database and rate-limit by user IPs. I'll need to take care of the overhead of things that could go wrong (someone switching IPs and burning through my credits, load balancing and rate limiting the backend server, someone using the API for their own purposes).

Is there any way around this?
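There isn't a built-in way around it today; the self-hosted proxy really is the standard answer. The bookkeeping side of that proxy is small, though. A sketch with illustrative names and the $5 cap from the post; authentication, persistence, and the abuse cases listed above (IP switching, load balancing) are deliberately left out:

```python
class BudgetGate:
    """Per-user spend cap for a proxy that forwards calls using one shared key."""

    def __init__(self, cap_usd: float = 5.0):
        self.cap = cap_usd
        self.spent: dict = {}  # user_id -> running spend in dollars

    def allow(self, user_id: str) -> bool:
        """True while the user is under the cap."""
        return self.spent.get(user_id, 0.0) < self.cap

    def record(self, user_id: str, cost_usd: float) -> None:
        """Add the cost of one forwarded call to the user's tally."""
        self.spent[user_id] = self.spent.get(user_id, 0.0) + cost_usd

# In the request handler (hypothetical helper names):
# if gate.allow(user_id):
#     resp = forward_to_openai(payload)          # your key, your server
#     gate.record(user_id, cost_of(resp.usage))  # price out the usage object
# else:
#     reject_with_402()
```

Keying the gate on an authenticated user id rather than an IP address sidesteps the IP-switching problem, at the cost of running your own signup flow.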

🌐
Reddit
reddit.com › r/langchain › open ai apis are the only reliable apis in production
r/LangChain on Reddit: OpenAI APIs are the only reliable APIs in production
May 15, 2024 -

After having worked with the Anthropic API and the Gemini 1.5 Pro & Flash APIs, the OpenAI API seems to be the only reliable API service available.
With Anthropic, I am unable to add credits in their console; even after multiple emails to customer support I have received no resolution. So I finally gave up hope and just use OpenAI.
With Google Gemini, the APIs are absolutely unreliable: you are never sure when they will return an answer and when they will not. I keep encountering errors from the API like: StopCandidateException: finish_reason: RECITATION
So again, no point in using Gemini; just switch to OpenAI.

Hoping this experience will benefit the community.

Anyone else having these issues?

🌐
Reddit
reddit.com › r/localllama › i created an app that allows you use openai api without api key (through desktop app)
r/LocalLLaMA on Reddit: I created an app that allows you to use the OpenAI API without an API key (through a desktop app)
April 15, 2025 - If OpenAI wants to prevent this, then it's up to them to do that. ... Agreed, there have been browser extensions that work this way since ChatGPT launched (like harpa.ai) and they've never been taken down. ... Not sure why you're downvoted. This (through the web app and VMs) was the meta game for several months lol ... Yup, a lot of people started building on top of questionable APIs or downloaded wrappers that allowed them to use the website as an API.