Pingora is an HTTP proxy framework developed by Cloudflare, often compared to Nginx. It has been battle-tested in Cloudflare's environment for several years, reportedly handling over 40 million internet requests per second.
Despite Cloudflare's frequent comparisons between Pingora and Nginx, the two are not directly comparable from a user's perspective: Nginx is a standalone server, while Pingora is a library. So if you're simply looking to swap Nginx out for Pingora, you may be disappointed. In that case, you might be interested in higher-level applications built on Pingora, such as pingap.
However, precisely because it is a library, Pingora offers far greater flexibility: we can package Nginx-like functionality directly into our own application (at the cost of some convenience).
Example Usage
For example, in a typical frontend application like React or Vue, after building the static files, you would deploy them with Nginx and point the entry directly to the index.html file. If there’s routing involved, an Nginx configuration might look something like this:
location / {
    try_files $uri $uri/ /index.html;
}
Since Pingora offers none of this out of the box, we need to implement the logic ourselves.
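The `try_files` rule above boils down to a small decision: serve the requested asset if it exists, otherwise fall back to `index.html` so the SPA router can handle the path. As a standalone sketch (`asset_exists` is a hypothetical stand-in for a real lookup such as rust-embed's `Asset::get`):

```rust
/// Mirror of nginx's `try_files $uri /index.html` for an SPA:
/// serve the exact asset if it exists, otherwise fall back to index.html.
/// `asset_exists` is a hypothetical stand-in for a real asset lookup.
fn resolve_asset(path: &str, asset_exists: impl Fn(&str) -> bool) -> String {
    // Asset keys are relative paths, so drop the leading slash
    let key = path.trim_start_matches('/');
    if !key.is_empty() && asset_exists(key) {
        key.to_string()
    } else {
        "index.html".to_string()
    }
}
```

This is exactly the branch we will reimplement inside Pingora's request handling below.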
Adding Dependencies
First, add Pingora as a dependency in your project:
[dependencies]
pingora = { version = "0.4.0", features = ["lb"] }
To embed the front-end resources into the binary, add the rust-embed crate:
[dependencies]
rust-embed = "8.5.0"
Create an embedded resource structure:
// asset.rs
use rust_embed::RustEmbed;
#[derive(RustEmbed)]
#[folder = "../dist"]
pub struct Asset;
Here, dist is the directory containing the built static files from the front end.
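Conceptually, `#[derive(RustEmbed)]` turns the `dist` directory into a compile-time map from relative path to file bytes, queried via `Asset::get`. A rough stand-in using a `HashMap` (illustrative only; this is not the rust-embed API, and the file contents are made up):

```rust
use std::collections::HashMap;

// Illustrative stand-in for what `#[derive(RustEmbed)]` generates:
// a lookup from a path relative to `dist` to the file's bytes.
fn embedded_files() -> HashMap<&'static str, &'static [u8]> {
    let mut m = HashMap::new();
    m.insert("index.html", b"<html>...</html>".as_slice());
    m.insert("assets/app.js", b"console.log('hi')".as_slice());
    m
}

/// Look up an embedded file by its path relative to `dist`,
/// the way `Asset::get` is used later in the proxy.
fn get_embedded(path: &str) -> Option<&'static [u8]> {
    embedded_files().get(path).copied()
}
```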
Implementing Reverse Proxy Logic
Define a proxy server structure:
pub(crate) struct ServerProxy {
    pub port: u16,
    pub target_host: String,
    lb: Arc<LoadBalancer<RoundRobin>>,
}
Besides the port and target host, the struct holds a load balancer (lb) that selects an upstream for backend API requests.
Implement an initialization factory function:
/// Creates a new `ServerProxy` instance.
///
/// # Arguments
/// * `port`: The port number to listen on.
/// * `up_streams`: A list of upstream server addresses.
/// * `target_host`: The address of the target host.
///
/// # Returns
/// An initialized `ServerProxy` instance.
pub fn new(port: u16, up_streams: Vec<String>, target_host: String) -> Self {
    // Create a new load balancer instance
    let mut lb = LoadBalancer::try_from_iter(up_streams).unwrap();
    // Add TCP health checks, probing every 3 seconds
    let hc = TcpHealthCheck::new();
    lb.set_health_check(hc);
    lb.health_check_frequency = Some(Duration::from_secs(3));
    // Wrap the load balancer in a background service that runs the health checks.
    // Note: for the probes to actually fire, the background service itself should
    // also be registered on the server (server.add_service(background)); here only
    // the shared task handle is kept.
    let background = background_service("health check", lb);
    let lb = background.task();
    Self {
        port,
        lb,
        target_host,
    }
}
Health checks are attached directly to the load balancer: a TCP probe against each upstream every three seconds.
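If the defaults are too aggressive, `TcpHealthCheck` exposes thresholds for how many consecutive probe results flip a backend's health state. Treat the field names as a sketch against pingora 0.4 rather than gospel:

```rust
// Sketch: tune how many consecutive probes change a backend's health state.
let mut hc = TcpHealthCheck::new();
hc.consecutive_success = 2; // probes needed to mark a backend healthy again
hc.consecutive_failure = 2; // probes needed to mark it unhealthy
lb.set_health_check(hc);
```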
Implement the ProxyHttp trait for ServerProxy:
#[async_trait]
impl ProxyHttp for ServerProxy {
    type CTX = ();

    async fn request_filter(&self, _session: &mut Session, _ctx: &mut Self::CTX) -> Result<bool>
    where
        Self::CTX: Send + Sync,
    {
        // Rate limiting can be implemented here
        Ok(false)
    }

    async fn proxy_upstream_filter(
        &self,
        session: &mut Session,
        _ctx: &mut Self::CTX,
    ) -> Result<bool>
    where
        Self::CTX: Send + Sync,
    {
        // Forward `/api` paths upstream; treat all other paths as frontend routes
        let path = session.as_ref().req_header().uri.path();
        let method = session.as_ref().req_header().method.as_ref();
        if path.starts_with("/api") {
            info!("request path: {}, method: {}", path, method);
            return Ok(true);
        }
        // For other paths, serve the embedded frontend content
        let start_path = path.strip_prefix('/').unwrap_or_default();
        let send_body = session.req_header().method != Method::HEAD;
        let content = match Asset::get(start_path) {
            Some(content) => Bytes::copy_from_slice(&content.data),
            None => {
                // SPA fallback: unknown paths get index.html, mirroring `try_files`
                let path = "index.html";
                Asset::get(path)
                    .map(|b| Bytes::copy_from_slice(&b.data))
                    .unwrap_or(Bytes::from_static(b"404 Not Found"))
            }
        };
        // Construct and write response headers
        let header = web_response(path, content.len())?;
        session.write_response_header(Box::new(header), !send_body).await?;
        // Write response body if necessary
        if send_body {
            session.write_response_body(Some(content), true).await?;
        }
        Ok(false)
    }

    async fn upstream_peer(
        &self,
        _session: &mut Session,
        _ctx: &mut Self::CTX,
    ) -> Result<Box<HttpPeer>> {
        // Select an upstream server using the load balancing strategy
        let upstream = self.lb.select(b"", 256).unwrap();
        info!("upstream peer: {:?}", upstream);
        // Create and return an HttpPeer instance
        let peer = Box::new(HttpPeer::new(
            upstream,
            false,
            self.target_host.to_owned(),
        ));
        Ok(peer)
    }
}
The request_filter method handles initial request processing such as rate limiting; proxy_upstream_filter decides whether the request is forwarded to an upstream server (returning Ok(true)) or answered directly; and upstream_peer selects which upstream server receives the request.
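The `web_response` helper isn't shown in the article; presumably it builds a `ResponseHeader` (e.g. via `ResponseHeader::build(200, None)` plus `insert_header` calls) with a Content-Type derived from the request path. The content-type guess itself can be sketched as a plain function; the mapping below is an assumption, not taken from the original code:

```rust
/// Hypothetical helper: guess a Content-Type from the path's extension.
/// Unknown extensions default to HTML, since SPA routes fall back to index.html.
fn content_type_for(path: &str) -> &'static str {
    match path.rsplit('.').next() {
        Some("html") | Some("htm") => "text/html; charset=utf-8",
        Some("js") => "application/javascript",
        Some("css") => "text/css",
        Some("json") => "application/json",
        Some("png") => "image/png",
        Some("svg") => "image/svg+xml",
        Some("ico") => "image/x-icon",
        _ => "text/html; charset=utf-8",
    }
}
```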
Registering the Service
Finally, implement a method to register the ServerProxy instance as a service:
/// Registers the current instance as a service on the specified server.
///
/// # Arguments
/// * `server`: A mutable reference to the server instance to add the HTTP proxy service to.
pub fn into_service(self, server: &mut Server) {
    // Build the listening address from the instance's port
    let addr = format!("0.0.0.0:{}", self.port);
    // Create an HTTP proxy service from the server configuration and this instance
    let mut service = http_proxy_service(&server.configuration, self);
    // Add a TCP listener to the HTTP proxy service
    service.add_tcp(&addr);
    // Add the HTTP proxy service to the server
    server.add_service(service);
    // Log the address the proxy service is listening on
    info!("PP listening on {}", addr);
}
Usage
Here’s how you can use it:
fn main() {
    // Initialize logging with a compact format and INFO as the maximum level
    tracing_subscriber::fmt()
        .event_format(format().compact())
        .with_max_level(Level::INFO)
        .init();
    // Create and initialize the server object
    let mut pp_server = Server::new(Some(Opt::parse_args())).unwrap();
    pp_server.bootstrap();
    // Define upstream server addresses
    let up_streams = vec![
        format!("127.0.0.1:{}", 1200),
        format!("127.0.0.1:{}", 1201),
    ];
    // Define the target host address
    let target_host = "127.0.0.1".to_owned();
    // Create the proxy object
    let pp = ServerProxy::new(3000, up_streams, target_host);
    // Register the proxy object as a service on the server
    pp.into_service(&mut pp_server);
    // Log that the proxy service is about to start
    info!("starting...");
    // Run the server and enter the event loop
    pp_server.run_forever();
}
After starting the service, you have Nginx-like static serving, the front-end assets, and a load balancer combined in a single Rust binary.
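Once it's running, the routing split can be exercised with curl. This is a sketch: `/api/health` is a hypothetical endpoint, and the second request only succeeds if backends are actually listening on ports 1200/1201.

```shell
# Frontend route: answered directly from the embedded assets
curl -i http://127.0.0.1:3000/
# API route: load-balanced to 127.0.0.1:1200 / 127.0.0.1:1201
curl -i http://127.0.0.1:3000/api/health
```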
Conclusion
As you can see, for small projects or those without heavy customization needs, using Pingora directly offers no significant advantage over Nginx; many of these features could just as well be implemented inside an ordinary web framework when high concurrency is not a concern.
If you’re looking for a pure Rust implementation that closely mirrors Nginx’s capabilities, consider the following alternatives: