The project consists of a gRPC chatbox server that handles messages from browsers. When a browser sends a message, it first passes through Envoy Proxy (for rate limiting) and then reaches our chatbox server. There, the gRPC message is decomposed and sent to the OpenAI Assistants API to get a response.
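That server-side flow can be sketched as follows. All names here (`ChatHandler`, `ask_assistant`, the request/reply types) are hypothetical stand-ins: the real service would use grpcio-generated stubs for the message types and the OpenAI SDK for the assistant call, but the shape of the handler is the same.

```python
# Sketch of the chatbox server's core: a handler receives a decoded
# message and forwards its text to an assistant backend for a reply.
# Names are hypothetical; the real service uses grpcio-generated
# stubs and the OpenAI Assistants API instead of these stand-ins.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ChatRequest:
    user_id: str
    text: str


@dataclass
class ChatReply:
    text: str


class ChatHandler:
    """Stands in for the gRPC servicer: decomposes the request and
    delegates to whatever assistant backend is injected."""

    def __init__(self, ask_assistant: Callable[[str], str]):
        self._ask = ask_assistant

    def handle(self, req: ChatRequest) -> ChatReply:
        # In the real server this is where the gRPC message is
        # unpacked and the OpenAI Assistants API call happens.
        return ChatReply(text=self._ask(req.text))


# Usage with a stubbed backend (no network involved):
handler = ChatHandler(ask_assistant=lambda q: f"echo: {q}")
reply = handler.handle(ChatRequest(user_id="u1", text="hello"))
print(reply.text)  # → echo: hello
```

Injecting the backend as a callable keeps the handler testable without an API key, which is handy when iterating on the Envoy side of the setup.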
The Assistants API offered by OpenAI is very powerful: it holds enough context to answer questions about me correctly and to handle private questions politely. Questions about things like gender or race are left unrevealed, although the model does sometimes answer questions that are not related to me. In my personal experience, the model works very well as a second brain for a company to search information, or as a helpdesk.
While implementing the solution, I hit a roadblock finding the correct configuration for Envoy Proxy. Here is an example of envoy-proxy.yaml that accepts HTTP/1 and HTTP/2 on port 6005, translates it to gRPC, and forwards it to port 5558, where the gRPC server is listening. It also handles CORS and surfaces gRPC errors to the HTTP response via the grpc-status and grpc-message headers.
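As a sketch, adapted from the canonical Envoy gRPC-Web example (exact field names vary across Envoy versions, and the ports 6005/5558 are the ones from this setup), the configuration looks roughly like this:

```yaml
# Sketch of an envoy-proxy.yaml for browser -> gRPC bridging.
# Adapted from the well-known Envoy gRPC-Web example; verify field
# names against the docs for your Envoy version.
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 6005 }  # browser-facing port
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO          # accept both HTTP/1.1 and HTTP/2
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: chatbox
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: chatbox_service }
          http_filters:
          # grpc_web translates gRPC-Web requests from the browser
          # into plain gRPC for the upstream, and maps gRPC errors
          # back onto grpc-status / grpc-message response headers.
          - name: envoy.filters.http.grpc_web
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
          - name: envoy.filters.http.cors
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: chatbox_service
    connect_timeout: 1s
    type: LOGICAL_DNS
    lb_policy: ROUND_ROBIN
    # Force HTTP/2 upstream, since the chatbox server speaks gRPC.
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    load_assignment:
      cluster_name: chatbox_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 5558 }  # gRPC server
```

The key pieces are the grpc_web filter (the "upgrade" step) and the HTTP/2 protocol options on the upstream cluster; a CORS policy exposing grpc-status and grpc-message would additionally be attached to the virtual host or route so the browser can read those headers.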