Powering AI Capabilities with Apache APISIX and OpenAI API

Artificial intelligence (AI) has revolutionized the way we interact with technology and has become an integral part of modern applications. The OpenAI API provides developers with powerful AI capabilities, allowing them to build advanced AI applications with ease.

However, as the usage of AI grows, so does the need for scalable, performant, and secure API integrations. This is where Apache APISIX comes in. Apache APISIX is a high-performance open-source API gateway that provides advanced features for managing and scaling API integrations.

In this blog post, we will explore the benefits of integrating Apache APISIX with the OpenAI API and how you can use Apache APISIX to create a more scalable, performant, and secure AI integration. From proxy caching to security features, we will cover everything you need to know to get started with Apache APISIX and OpenAI API integration. Whether you're an AI developer or a DevOps professional, this blog post is your complete guide to creating a powerful and cost-effective AI integration.

Learning objectives

You will learn the following throughout the article:

  • What are OpenAI API and Apache APISIX?
  • The benefits of using Apache APISIX with the OpenAI API.
  • Several Apache APISIX plugin use cases that enhance the OpenAI API.
  • How to create a new Route in APISIX for the OpenAI API.
  • How to add the OpenAI API endpoint as an Upstream for the route.
  • How to configure authentication, rate limiting, and caching for the route as needed.
  • How to test the route to make sure requests are being forwarded correctly to the OpenAI API.

What's OpenAI API?

OpenAI is a cutting-edge platform for creating and deploying advanced artificial intelligence models. These models can be used for a variety of tasks, such as natural language processing, image recognition, and sentiment analysis. One of the key benefits of OpenAI is that it provides an API that developers can use to access these models and incorporate them into their applications.

The OpenAI API is a cloud-based platform that provides access to OpenAI's AI models, including ChatGPT. The API allows developers to integrate AI capabilities into their applications.

ChatGPT is just one of the AI models available through the OpenAI API, and it is particularly well-suited for use cases that require natural language processing and text generation capabilities. For example, ChatGPT can be used to generate text responses in a chatbot, provide text completion suggestions, code completion or answer questions in a conversational interface.

What's Apache APISIX?

Apache APISIX is an open-source cloud-native API traffic management solution that offers API Gateway features to create RESTful APIs that are scalable, secure, and highly available.

By using API Gateway with the OpenAI API, you can easily create and deploy scalable, secure, and high-performance APIs that access the OpenAI models. This will allow you to incorporate the power of OpenAI into your applications and provide a great experience for your users.

What are the benefits of using Apache APISIX with the OpenAI API?

There are several benefits of using Apache APISIX with the OpenAI API:

  1. Scalability: Apache APISIX provides an easy way to manage and scale the OpenAI API, allowing you to handle increased traffic and usage demands.

  2. Performance: Apache APISIX can help to improve the performance of OpenAI API requests by caching responses and reducing latency.

  3. Security: Apache APISIX provides security features such as encryption and authentication, making it easy to secure access to the OpenAI API.

  4. Flexibility: Apache APISIX provides a flexible way to manage and control access to the OpenAI API, allowing you to customize and configure your integration as needed.

  5. Monitoring and Analytics: Apache APISIX provides detailed monitoring and analytics, allowing you to track and optimize the performance of your OpenAI API integration.

Apache APISIX plugins to enhance the OpenAI API

There are several Apache APISIX plugins that can be used to enhance the integration with the OpenAI API. Some of the plugins you can use with the OpenAI API include:

  • rate-limiting: To limit the number of API requests and prevent overuse of the OpenAI API.
  • authentication: To secure access to the OpenAI API by implementing authentication and authorization mechanisms.
  • traffic-control: To control the flow of API traffic and ensure consistent performance and stability of the OpenAI API.
  • observability: To monitor and log API requests and responses, providing visibility into the usage and performance of the OpenAI API.
  • caching: To cache API responses and reduce the number of API requests, improving performance and reducing the cost of using the OpenAI API.
  • transformation: To modify API requests and responses, transforming data from one format to another, such as JSON to XML.

Manage OpenAI APIs with Apache APISIX Demo

With enough theoretical knowledge in mind, we can now jump into a practical session. In this example, Apache APISIX is used to create a simple API gateway that accesses the OpenAI API and manages the traffic by creating a route and an upstream and enabling some plugins. We are going to interact with the OpenAI API completions endpoint to build a product description generator that produces product descriptions efficiently and accurately.

For example, a typical request to the API Gateway will look like the below:

curl http://127.0.0.1:9080/openai/product/desc -X POST -d \
'{
   "model":"text-davinci-003",
   "prompt":"Write a brief product description for Apple 13 pro",
   "temperature":0,
   "max_tokens":256
}'

And, we will get as an output:

{
   "object":"text_completion",
   "model":"text-davinci-003",
   "choices":[
      {
         "text":"\n\nThe Apple 13 Pro is the perfect laptop for those who need a powerful and reliable machine. 
It features a 13-inch Retina display with True Tone technology, a powerful 8th-generation Intel Core i5 processor, 8GB of RAM, and a 256GB SSD for storage. 
It also has a Touch Bar and Touch ID for added security and convenience. With up to 10 hours of battery life, you can stay productive all day long. 
The Apple 13 Pro is the perfect laptop for those who need a powerful and reliable machine.",
         "index":0,
         "finish_reason":"stop"
      }
   ],
   "usage":{
      "prompt_tokens":9,
      "completion_tokens":109,
      "total_tokens":118
   }
}
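In application code you will typically parse this JSON and pull out the generated text. A minimal Python sketch (the response body below is the abbreviated example above; `extract_completion` is a hypothetical helper, not part of any OpenAI SDK):

```python
import json

# Abbreviated completion response, as returned by the gateway above.
response_body = '''
{
   "object": "text_completion",
   "model": "text-davinci-003",
   "choices": [
      {"text": "\\n\\nThe Apple 13 Pro is the perfect laptop...", "index": 0, "finish_reason": "stop"}
   ],
   "usage": {"prompt_tokens": 9, "completion_tokens": 109, "total_tokens": 118}
}
'''

def extract_completion(body):
    """Return the text of the first choice, stripped of leading whitespace."""
    payload = json.loads(body)
    return payload["choices"][0]["text"].strip()

print(extract_completion(response_body))  # The Apple 13 Pro is the perfect laptop...
```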

Prerequisites

  • Must be familiar with fundamental OpenAI API completion model concepts.
  • Create an OpenAI API Key: To access the OpenAI API, you will need to create an API Key. You can do this by logging into the OpenAI website and navigating to the API Key management page.
  • Docker installed on your machine to run APISIX.
  • Basic knowledge of a couple of APISIX core concepts, such as Route, Upstream, and Plugin.

Set up the project

First, clone the apisix-docker project repo from GitHub:

git clone https://github.com/apache/apisix-docker.git

Open the project folder in your favorite code editor. The tutorial leverages VS Code.

Install and run Apache APISIX

To run Apache APISIX, you can follow these steps:

Open a new terminal window and run the following docker compose command from the root folder of the project:

docker compose up -d

The above command runs Apache APISIX and etcd together with Docker.

We installed APISIX using Docker in this demo. However, other options are described in the installation guide.

Create an Upstream for the OpenAI API

Once the setup is complete, we will create an Upstream object in APISIX using its Admin API. "Upstream" in APISIX refers to the backend servers that serve the actual request data.

In our case, we define the upstream API server at api.openai.com with a single node and the https scheme so that APISIX communicates with the upstream securely:

curl "http://127.0.0.1:9180/apisix/admin/upstreams/1" -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1" -X PUT -d '
{
  "name": "OpenAI API upstream",
  "desc": "Add the OpenAI API domain as the upstream",
  "type": "roundrobin",
  "scheme": "https",
  "nodes": {
    "api.openai.com:443": 1
  }
}'
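The `type: roundrobin` setting and the `nodes` weight map determine how APISIX spreads requests across upstream nodes. With a single node all traffic goes to api.openai.com, but the same mechanism balances load once more nodes are added. A rough Python sketch of weighted round-robin selection (illustrative only, not APISIX's actual implementation):

```python
import itertools

def round_robin(nodes):
    """Yield node addresses in proportion to their configured weights."""
    # Expand each node by its weight, then cycle over the expanded list forever.
    expanded = [addr for addr, weight in nodes.items() for _ in range(weight)]
    return itertools.cycle(expanded)

# Same shape as the "nodes" map in the upstream config above.
picker = round_robin({"api.openai.com:443": 1})
print(next(picker))  # api.openai.com:443

# With two nodes weighted 2:1, two thirds of the requests go to the first node.
picker = round_robin({"a:443": 2, "b:443": 1})
print([next(picker) for _ in range(6)])  # ['a:443', 'a:443', 'b:443', 'a:443', 'a:443', 'b:443']
```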

Create a new plugin config

Now we create a new plugin config with the proxy-rewrite plugin enabled.

The proxy-rewrite plugin rewrites incoming requests so that they target the OpenAI API completions endpoint. Its configuration sets the upstream URI and host, passes along the OpenAI API key in the Authorization header (replace the placeholder with your actual key, in the form Bearer YOUR_API_KEY), and sets the Content-Type header to application/json.

curl http://127.0.0.1:9180/apisix/admin/plugin_configs/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' 
{
   "plugins":{
      "proxy-rewrite":{
         "uri":"/v1/completions",
         "host":"api.openai.com",
         "headers":{
            "Authorization":"OpenAI API Key",
            "Content-Type":"application/json"
         }
      }
   }
}'
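Conceptually, proxy-rewrite transforms the inbound gateway request before it is proxied upstream. A simplified Python illustration of that transformation (not APISIX source code; `proxy_rewrite` is a hypothetical helper mirroring the plugin settings above):

```python
def proxy_rewrite(request, conf):
    """Return a copy of the request with its URI, host, and headers rewritten."""
    rewritten = dict(request)
    rewritten["uri"] = conf["uri"]    # e.g. /openai/product/desc -> /v1/completions
    rewritten["host"] = conf["host"]  # point the request at the upstream host
    headers = dict(request.get("headers", {}))
    headers.update(conf.get("headers", {}))  # inject Authorization and Content-Type
    rewritten["headers"] = headers
    return rewritten

# Mirrors the proxy-rewrite settings above (the key value is a placeholder).
conf = {
    "uri": "/v1/completions",
    "host": "api.openai.com",
    "headers": {"Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json"},
}
inbound = {"uri": "/openai/product/desc", "host": "127.0.0.1", "headers": {}}
outbound = proxy_rewrite(inbound, conf)
print(outbound["uri"], outbound["host"])  # /v1/completions api.openai.com
```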

Set up a Route for the OpenAI completion endpoint

In the next step, we set up a new Route in APISIX to handle POST requests at the custom API Gateway URI path /openai/product/desc, referencing the upstream and plugin config created in the previous steps by their unique IDs:

curl -i http://127.0.0.1:9180/apisix/admin/routes/1 \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
   "name":"OpenAI API completion route",
   "desc":"Create a new route in APISIX for the OpenAI API completion endpoint",
   "methods":[
      "POST"
   ],
   "uri":"/openai/product/desc",
   "upstream_id":"1",
   "plugin_config_id":1
}'

Additionally, you can configure retries, a timeout, and a keepalive timeout on the route or upstream to make communication with the OpenAI API more robust and resilient.

Test With a Curl Request

To test the API, you can make a POST request to the endpoint /openai/product/desc using a tool like cURL or Postman. The API Gateway will forward the request to the OpenAI API completion endpoint and return the results successfully.

curl http://127.0.0.1:9080/openai/product/desc -X POST -d \
'{
   "model":"text-davinci-003",
   "prompt":"Write a brief product description for Apple 13 pro",
   "temperature":0,
   "max_tokens":256
}'

Great! We got a response from the actual completions endpoint:

HTTP/1.1 200 OK
Content-Type: application/json
...
{
   "object":"text_completion",
   ...
   "choices":[
      {
         "text":"\n\nThe Apple 13 Pro is the perfect laptop...",
         "index":0,
         "logprobs":null,
         "finish_reason":"stop"
      }
   ],
...
}

Create a new consumer and add authentication

Up to now, our API Gateway product description endpoint /openai/product/desc has been public and accessible to anyone (although the communication between APISIX and the OpenAI API is secured with the API key in the header). In this section, we will enable authentication to reject unauthorized requests to our API.

To do so, we need to create a new consumer for our endpoint and add the basic-auth plugin to the existing plugin config so that only allowed users can access it.

The command below creates a new consumer named consumer1 with the credentials username1 and password1:

curl http://127.0.0.1:9180/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "username": "consumer1",
    "plugins": {
        "basic-auth": {
            "username": "username1",
            "password": "password1"
        }
    }
}'

Now we update the existing plugin config, appending the basic-auth plugin so that APISIX checks the request header against the API consumer credentials each time the API is called:

curl http://127.0.0.1:9180/apisix/admin/plugin_configs/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' 
{
   "plugins":{
      "proxy-rewrite":{
         "uri":"/v1/completions",
         "host":"api.openai.com",
         "headers":{
            "Authorization":"OpenAI API Key",
            "Content-Type":"application/json"
         }
      },
      "basic-auth":{

      }
   }
}'
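Under the hood, HTTP basic authentication is just an Authorization: Basic header carrying base64(username:password), which the gateway compares against the stored consumer credentials. A minimal Python sketch of that check (`check_basic_auth` is a hypothetical helper, not APISIX code):

```python
import base64

# Credentials stored for consumer1, mirroring the consumer object created above.
CONSUMERS = {"username1": "password1"}

def check_basic_auth(authorization_header):
    """Validate an 'Authorization: Basic <base64>' header against known consumers."""
    scheme, _, encoded = authorization_header.partition(" ")
    if scheme != "Basic":
        return False
    try:
        username, _, password = base64.b64decode(encoded).decode().partition(":")
    except Exception:
        return False
    return CONSUMERS.get(username) == password

# curl -u username1:password1 sends exactly this header.
header = "Basic " + base64.b64encode(b"username1:password1").decode()
print(check_basic_auth(header))        # True
print(check_basic_auth("Bearer abc"))  # False
```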

Now, only if we provide the correct user credentials in the request can we access the endpoint and get the expected response from the OpenAI API:

curl -i -u username1:password1 http://127.0.0.1:9080/openai/product/desc  -X POST -d \
'{
   "model":"text-davinci-003",
   "prompt":"Write a brief product description for Apple 13 pro",
   "temperature":0,
   "max_tokens":256
}'

Apply rate-limiting policies

In this section, we will protect our product description endpoint from abuse by applying a throttling policy. In Apache APISIX Gateway we can apply rate limiting to restrict the number of incoming calls.

Apply and test the rate-limit policy

With the existing route configuration, we can apply a rate-limit policy with the limit-count plugin to protect our API from abnormal usage. We will limit the number of API calls to 2 per 60-second window per client IP address (the remote_addr key in the configuration below).

To enable limit-count plugin for the existing route, we need to add the plugin to plugins list in our Json plugin configuration:

curl http://127.0.0.1:9180/apisix/admin/plugin_configs/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' 
{
   "plugins":{
      "proxy-rewrite":{
         "uri":"/v1/completions",
         "host":"api.openai.com",
         "headers":{
            "Authorization":"OpenAI API Key",
            "Content-Type":"application/json"
         }
      },
      "basic-auth":{

      },
      "limit-count":{
         "count":2,
         "time_window":60,
         "rejected_code":403,
         "rejected_msg":"Requests are too frequent, please try again later.",
         "key_type":"var",
         "key":"remote_addr"
      }
   }
}'
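The limit-count policy above is essentially a fixed-window counter keyed by the client address. A toy Python version of the same idea (an illustrative sketch, not the plugin's actual implementation):

```python
from collections import defaultdict

class LimitCount:
    """Fixed-window counter: allow `count` requests per `time_window` seconds per key."""
    def __init__(self, count, time_window):
        self.count = count
        self.time_window = time_window
        self.windows = defaultdict(lambda: [0.0, 0])  # key -> [window_start, hits]

    def allow(self, key, now):
        window = self.windows[key]
        if now - window[0] >= self.time_window:
            window[0], window[1] = now, 0  # a new window starts: reset the counter
        window[1] += 1
        return window[1] <= self.count

# 2 requests per 60 s per remote_addr, as configured above.
limiter = LimitCount(count=2, time_window=60)
print(limiter.allow("10.0.0.1", now=0))   # True
print(limiter.allow("10.0.0.1", now=1))   # True
print(limiter.allow("10.0.0.1", now=2))   # False -> APISIX would answer 403 here
print(limiter.allow("10.0.0.1", now=61))  # True -> a new 60 s window
```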

Apache APISIX will handle the first two requests as usual. However, a third request in the same period will return a 403 HTTP Forbidden code with our custom error message:

curl -i -u username1:password1 http://127.0.0.1:9080/openai/product/desc  -X POST -d \
'{
   "model":"text-davinci-003",
   "prompt":"Write a brief product description for Apple 13 pro",
   "temperature":0,
   "max_tokens":256
}'

# Response to the first call

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 752
Connection: keep-alive
X-RateLimit-Limit: 2
X-RateLimit-Remaining: 1

# Response to the third call within the same window

HTTP/1.1 403 Forbidden

{"error_msg":"Requests are too frequent, please try again later."}

Configure caching for the OpenAI API response

Apache APISIX proxy caching allows you to cache API responses and serve them to subsequent requests. This reduces the number of upstream API requests, which lowers the usage cost of the OpenAI API, improves the performance of your integration, and reduces the load on the API server.

Apache APISIX provides fine-grained control over the caching behavior, allowing you to specify the cache expiration time, the conditions for cache invalidation, and other caching policies.

In the configuration below, we add the proxy-cache plugin alongside the existing plugins so that only successful (HTTP 200) responses to POST requests to the OpenAI API completions endpoint are cached:

curl http://127.0.0.1:9180/apisix/admin/plugin_configs/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' 
{
   "plugins":{
      "proxy-rewrite":{
         "uri":"/v1/completions",
         "host":"api.openai.com",
         "headers":{
            "Authorization":"OpenAI API Key",
            "Content-Type":"application/json"
         }
      },
      "basic-auth":{

      },
      "proxy-cache":{
         "cache_key":[
            "$uri",
            "-cache-id"
         ],
         "cache_method":[
            "POST"
         ],
         "cache_http_status":[
            200
         ],
         "hide_cache_headers":true
      }
   }
}'
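The proxy-cache plugin keys cached entries on the request URI plus the literal "-cache-id" suffix and stores only responses with the configured status codes. The resulting MISS/HIT behavior can be sketched as a small TTL cache in Python (illustrative only; the ttl value here is an assumption, and APISIX's real cache is configured separately):

```python
import time

class ProxyCache:
    """Toy response cache: the first request is a MISS, repeats within the TTL are HITs."""
    def __init__(self, ttl=300.0):  # ttl is an assumed value, not from the config above
        self.ttl = ttl
        self.store = {}  # cache_key -> (expiry, response_body)

    def fetch(self, uri, upstream, cacheable_statuses=(200,)):
        cache_key = uri + "-cache-id"  # mirrors cache_key: ["$uri", "-cache-id"]
        now = time.monotonic()
        entry = self.store.get(cache_key)
        if entry and entry[0] > now:
            return "HIT", entry[1]
        status, body = upstream()  # forward the request to the OpenAI API
        if status in cacheable_statuses:  # cache only configured status codes
            self.store[cache_key] = (now + self.ttl, body)
        return "MISS", body

cache = ProxyCache()
upstream = lambda: (200, '{"object":"text_completion"}')
print(cache.fetch("/openai/product/desc", upstream)[0])  # MISS
print(cache.fetch("/openai/product/desc", upstream)[0])  # HIT
```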

We will send multiple requests to the /openai/product/desc path and should receive an HTTP 200 OK response each time. However, the Apisix-Cache-Status header in the first response shows MISS, meaning the response has not been cached yet when the request hits the route for the first time. If you make another request, you will get a cached response with the caching indicator HIT.

The response looks like as below:

HTTP/1.1 200 OK
...
Apisix-Cache-Status: MISS

When you make the next call to the service, the route answers with a cached response, since the previous request populated the cache:

HTTP/1.1 200 OK
...
Apisix-Cache-Status: HIT

Summary

Apache APISIX and OpenAI API integration involves combining the features of Apache APISIX, an open-source, high-performance microservices API gateway, with the advanced artificial intelligence capabilities of the OpenAI API to enhance the functionality and performance of applications. With this integration, developers can leverage the scalability and performance of Apache APISIX to manage microservices while leveraging the cutting-edge AI capabilities of OpenAI to deliver sophisticated and advanced features to their users.

At a later stage, you can deploy both APISIX and the OpenAI runtime code to an application server or any public cloud to make them available in production.

Throughout the post, we demonstrated only a few examples of Apache APISIX plugins that can be used with the OpenAI API. You can choose the plugins that best meet your needs and customize your Apache APISIX and OpenAI API integration to meet the specific requirements of your applications.

Community

🙋 Join the Apache APISIX Community 🐦 Follow us on Twitter 📝 Find us on Slack
