A class representing an OpenAI API client.

Create a new OpenaiApiClient instance.
The constructor options to use.
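A minimal construction sketch. The module specifier and the apiKey option name below are illustrative assumptions and are not confirmed by this reference:

  // Package name and option name are assumptions for illustration only.
  import { OpenaiApiClient } from 'openai-api-client'

  const client = new OpenaiApiClient({
    apiKey: process.env.OPENAI_API_KEY, // assumed option name
  })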
Readonly api
Defaults for API requests. Can be overridden in individual method calls.
Optional Readonly cache
API response cache.
Readonly cache
Default options for caching API requests. Can be overridden in individual method calls.
Readonly client
API client instance.
Protected concurrency
Readonly events
Event emitter for cache events.
Readonly queue
Global queue for sending requests to the OpenAI API.
Readonly retry
Default options for async retry of API requests. Can be overridden in individual method calls.
Protected Readonly send
Generic function for sending requests to the OpenAI API. This is used for all the API endpoints. It handles retrying, caching, hashing, and emitting events. This method is bound to the instance on initialization because it gets wrapped with a concurrency controller in the constructor.
Optional cache?: IResponseCacheOptions
Optional retry?: IAsyncRetryOptions
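A simplified sketch of the bind-then-wrap pattern described above, using p-limit as a stand-in concurrency controller; this is not the library's actual implementation:

  import pLimit from 'p-limit' // stand-in concurrency controller, not necessarily what the library uses

  class Example {
    // Arrow-function property keeps `this` bound, so the constructor can wrap and reassign it.
    protected send = async (request: object): Promise<string> => {
      // ...retrying, cache lookup, hashing and event emission would happen here...
      return JSON.stringify(request)
    }

    constructor(concurrency = 10) {
      const limit = pLimit(concurrency)
      const unwrapped = this.send
      // Every call to `send` now passes through the concurrency controller.
      this.send = (request) => limit(() => unwrapped(request))
    }
  }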
Static Readonly concurrency
Options for concurrency control. These affect all API requests.
All emitted event names. Please note that the cache also emits events.
Protected _chat
Send a chat request to the OpenAI API. This is used by all the preset public methods: chat3_8, chat3_16, and chat4_8.
The request object to send to the OpenAI API.
The retry options.
The cache options.
Protected _transcribe
Send a transcribe (speech-to-text) request to the OpenAI API.
The request object to send to the OpenAI API.
The retry options.
The cache options.
Protected assert
Protected delete
Delete all options that are undefined or equal to the default value. The response cache uses hashed options to determine if the request has already been made. Removing default values and undefined values normalizes the options object so it hashes the same.
The options to delete from.
The default values to compare against.
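A minimal sketch of the normalization idea described above; the function name and signature are assumptions, not this class's actual code:

  // Keep only properties that are defined and differ from the default, so that
  // equivalent option objects produce the same cache hash.
  function normalizeOptions<T extends Record<string, unknown>>(
    options: T,
    defaults: Partial<T>,
  ): Partial<T> {
    const result: Partial<T> = {}
    for (const key of Object.keys(options) as (keyof T)[]) {
      const value = options[key]
      if (value !== undefined && value !== defaults[key]) {
        result[key] = value
      }
    }
    return result
  }

  // Both of these normalize to { prompt: 'hi' } and therefore hash identically:
  // normalizeOptions({ prompt: 'hi', temperature: 1 }, { temperature: 1 })
  // normalizeOptions({ prompt: 'hi', temperature: undefined }, { temperature: 1 })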
Protected emit
Send a chat completion request to the OpenAI API with a max_tokens cap of 16384.
The options to use.
Send a chat completion request to the OpenAI API with a max_tokens cap of 4096.
The options to use.
Send a GPT-4 chat completion request to the OpenAI API with a max_tokens cap of 8192.
The options to use.
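A usage sketch for the public preset methods named earlier (chat3_8, chat3_16, and chat4_8). Which preset applies which max_tokens cap, and the exact shape of the options object, are assumptions here:

  // `client` as constructed in the sketch near the top; the prompt-string option shape is assumed.
  const a = await client.chat3_8({ prompt: 'Summarize this paragraph: ...' })
  const b = await client.chat3_16({ prompt: 'Write a long outline for: ...' })
  const c = await client.chat4_8({ prompt: 'Review this function: ...' })
  console.log(a, b, c)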
Protected handle
Protected handle
Handle cache options.
Optional cacheOptions: IResponseCacheOptions
The cache options to handle.
Protected handle
Handle chat options.
The options to handle.
Protected handle
Handle the options passed to the constructor.
The options to handle.
Protected handle
Handle retry options.
Optional retryOptions: IAsyncRetryOptions
The retry options to handle.
Protected handle
Handle transcribe options.
The options to handle.
Protected lower
Lower the concurrency to prevent rate limiting. Automatically raises the concurrency again after a delay.
The amount to lower the concurrency by.
The amount to raise the concurrency by after a delay.
The delay to wait before raising the concurrency again. This is randomized by +/- 5 seconds to prevent multiple requests from affecting the concurrency at the same time.
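A simplified, self-contained sketch of the lower/raise behavior described above; the queue object and parameter names are illustrative assumptions:

  // Illustrative stand-in for the instance's request queue.
  const queue = { concurrency: 10 }

  // Lower the concurrency now, then raise it again after a randomized delay so
  // that several rate-limited responses do not all adjust it at the same moment.
  function lowerConcurrency(lowerBy = 5, raiseBy = 1, delayMs = 60_000): void {
    queue.concurrency = Math.max(1, queue.concurrency - lowerBy)
    const jitterMs = (Math.random() - 0.5) * 10_000 // randomize by +/- 5 seconds
    setTimeout(() => {
      queue.concurrency += raiseBy
    }, delayMs + jitterMs)
  }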
Send a transcribe (speech-to-text) request to the OpenAI API.
The options to use.
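A usage sketch; the filepath option name is an assumption, not confirmed by this reference:

  // `client` as constructed in the sketch near the top.
  const transcript = await client.transcribe({ filepath: './meeting.mp3' }) // assumed option shape
  console.log(transcript)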