Because of geographic restrictions, I can't call the OpenAI API the official way. Even though I think I have configured a proxy, it still fails with APIConnectionError and 429.
It is strange because I'm sure I only send a single request, and when I did the same thing in Java with OkHttp earlier (calling the OpenAI API), I also got 429. So weird!
import requests
import openai

proxies = {
    # MY PROXY HOST
    "http": "http://127.0.0.1:7890",
    "https": "https://127.0.0.1:7890"
}
# try to route all requests traffic through the proxy
requests.session().proxies.update(proxies)

openai.api_key = "MYAPIKEY"
completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me about math"}]
)
print(completion)
So I changed the approach, and it finally succeeds. Here is the code:
import requests

# OpenAI API URL
url = "https://api.openai.com/v1/engines/gpt-3.5-turbo-instruct/completions"

headers = {
    "Authorization": "Bearer MYAPIKEY",
    "Content-Type": "application/json"
}

proxies = {
    # MY PROXY HOST
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890",
}

data = {
    "prompt": "Tell me about math",
    "max_tokens": 60
}

# send the request through the proxy; verify=False disables TLS verification
response = requests.post(url, json=data, headers=headers, proxies=proxies, verify=False)
print(response.json())
With this plain-requests approach I can use the API successfully, but I still want to improve it, and I have also heard that if you call OpenAI's API this way, your API key may get banned.
Can anybody help me with this, for example by improving this approach with some additional code, or by showing how to call it with OpenAI's official package? I have been stuck on it for a while, looking forward to your answers. :)
This is the success record from the plain-requests approach:
{'id': 'cmpl-8TCp9eVM1pOAU8PBmYbpfABJTBKaZ', 'object': 'text_completion', 'created': 1701971499, 'model': 'gpt-3.5-turbo-instruct', 'choices': [{'text': '\n\nMath, also known as mathematics, is the study of numbers, quantity, and space. It is a fundamental subject in education and plays a crucial role in various fields such as science, engineering, and finance.\n\nThe study of math involves learning about mathematical concepts, theories, and techniques to solve problems', 'index': 0, 'logprobs': None, 'finish_reason': 'length'}], 'usage': {'prompt_tokens': 4, 'completion_tokens': 60, 'total_tokens': 64}}
And this is the failure record from the official package:
Traceback (most recent call last):
File "C:\User\PycharmProjects\pythonProject\main.py", line 42, in <module>
completion = openai.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\User\PycharmProjects\pythonProject\venv\Lib\site-packages\openai\_utils\_utils.py", line 301, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\openai\resources\chat\completions.py", line 598, in create
return self._post(
^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\openai\_base_client.py", line 1096, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\openai\_base_client.py", line 856, in request
return self._request(
^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\openai\_base_client.py", line 894, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "C:\User\PycharmProjects\pythonProject\venv\Lib\site-packages\openai\_base_client.py", line 966, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\openai\_base_client.py", line 894, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\openai\_base_client.py", line 966, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\openai\_base_client.py", line 908, in _request
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
These are the officially supported OpenAI countries: https://platform.openai.com/docs/supported-countries
The client docs have an example of configuring a proxy:
import httpx
from openai import OpenAI

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=httpx.Client(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
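Applied to your case, a minimal sketch might look like the following, assuming the local proxy from your question (127.0.0.1:7890), the default base_url, and that "MYAPIKEY" is your real key. Note that on newer httpx versions the keyword argument is `proxy` rather than `proxies`.

import httpx
from openai import OpenAI

# assumes the local proxy from the question; on newer httpx versions use proxy="..."
client = OpenAI(
    api_key="MYAPIKEY",
    http_client=httpx.Client(proxies="http://127.0.0.1:7890"),
)

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me about math"}],
)
print(completion.choices[0].message.content)

Also note that the 429 in your traceback is insufficient_quota ("You exceeded your current quota"), so even with the proxy configured correctly you would still need available quota on your account.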