José Manuel Requena Plens

Serve Virtual Files with Nginx: Beyond the Document Root

Learn how to configure Nginx to serve dynamic or protected files like robots.txt, security.txt, and JSON responses without them physically existing in your web root. A practical guide for containerized apps and managed services.


When deploying a modern web application, especially a containerized one or a service running on a managed platform, you often don’t have easy access to its web root. What happens when you need to serve a standard file like robots.txt or .well-known/security.txt? You can’t just drop it into the folder.

Fortunately, Nginx is more than just a reverse proxy; it’s a powerful tool for manipulating requests and responses. In this guide, we’ll explore how to use Nginx to serve files and content that don’t physically reside in the path of the service being exposed. This is perfect for adding standard files to applications where you can’t—or shouldn’t—modify the underlying image or file system.

The Core Techniques

There are two primary methods to achieve this, each with its own use case.

  1. Using an alias: This directive tells Nginx to serve a request from a file located elsewhere on the filesystem. It’s ideal when you have a static file you want to serve, but it’s not in your application’s document root.

  2. Using return: This directive can stop processing and immediately return a specific response code and content. It’s incredibly efficient for small, simple text files where creating a separate file feels like overkill.

root vs. alias: A common point of confusion is the difference between root and alias. With root, Nginx appends the request URI to the path you specify. With alias, Nginx discards the matched part of the request URI and uses the path you specify directly. For serving a single file, alias is often more direct.
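That difference is easy to see with a quick shell sketch of the path mapping. This only mimics the mapping rules with string handling; the paths and location prefix are illustrative, not taken from a real Nginx setup:

```shell
# Suppose the request is GET /static/app.js and the location prefix is /static/.
uri="/static/app.js"

# With `root /data;`, nginx appends the FULL request URI to the path:
root_result="/data${uri}"
echo "$root_result"     # /data/static/app.js

# With `alias /data/files/;`, the matched prefix (/static/) is replaced:
alias_result="/data/files/${uri#/static/}"
echo "$alias_result"    # /data/files/app.js
```

This is also why `alias` pointing straight at a single file pairs so naturally with an exact-match `location =`.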


Example 1: The Ubiquitous robots.txt

The robots.txt file tells web crawlers which parts of your site they should or shouldn’t access. It’s a fundamental part of SEO and site management.

Method A: Serve a Local File with alias

Let’s say your application runs from /var/www/app, but you want to manage all your configuration files, including robots.txt, from a central location like /etc/nginx/assets/.

First, create your robots.txt file:

/etc/nginx/assets/robots.txt
User-agent: *
Allow: /
Sitemap: https://www.example.com/sitemap.xml

Now, configure your Nginx server block to intercept requests for /robots.txt and serve your custom file.

Nginx Site Configuration
server {
    server_name www.example.com;

    # The `location =` block provides an exact match, which is highly efficient.
    # See the Nginx docs for more on the `location` directive.
    location = /robots.txt {
        alias /etc/nginx/assets/robots.txt;
        # .txt files are already mapped to text/plain in mime.types,
        # so no explicit Content-Type directive is needed here
    }

    # All other requests go to your main application
    location / {
        proxy_pass http://localhost:3000;
        # ... other proxy settings
    }

}

Method B: Generate Content with return

If your robots.txt is very simple and you don’t want to manage another file, you can embed its content directly in the Nginx configuration. This is fantastic for container-based deployments where you want the entire configuration to be self-contained.

Nginx Site Configuration
server {
    server_name www.example.com;

    location = /robots.txt {
        # Set the MIME type for the generated response, then return the
        # body with a 200 status. Newlines are written as \n inside the
        # quoted string. (Using `add_header Content-Type ...` here would
        # add a second, duplicate Content-Type header.)
        default_type text/plain;
        return 200 "User-agent: *\nDisallow: /private/\n";
    }

    location / {
        proxy_pass http://localhost:3000;
    }
}
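If you want to sanity-check what that quoted string expands to, `printf '%b'` interprets the `\n` escapes the same way they appear in the response body. This is a local sketch; no Nginx involved:

```shell
body='User-agent: *\nDisallow: /private/\n'
expanded=$(printf '%b' "$body")   # %b turns \n into real line breaks
printf '%s\n' "$expanded"         # prints the two directives on separate lines
```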

Example 2: security.txt for Responsible Disclosure

The security.txt file is a standard, defined in RFC 9116, that allows security researchers to easily find information on how to report vulnerabilities. It must be served at the /.well-known/security.txt path.

Let’s generate the content directly from the Nginx configuration. Note the use of \n for line breaks.

Nginx Site Configuration
server {
    # ... server config ...

    # Exact match for the well-known path
    location = /.well-known/security.txt {
        # RFC 9116 requires Contact values to be URIs (e.g. mailto:)
        default_type text/plain;
        return 200 "Contact: mailto:security@example.com\nExpires: 2026-12-31T23:59:59.000Z\nPreferred-Languages: en\n";
    }

    # ... other locations ...

}

This configuration makes it easy for security researchers to get in touch, enhancing your site’s security posture without touching your application code.


Example 3: humans.txt for Credits

The humans.txt initiative is for giving credit to the people behind the website. It’s a simple, human-readable text file.

Method A: Using alias

/etc/nginx/assets/humans.txt
# TEAM
# Lead Developer: Jane Doe
# Contact: jane[at]example.com
# Twitter: @janedoe

# SITE
# Last update: 2025-12-18
# Standards: HTML5, CSS3
# Software: Nginx, Astro
Nginx Site Configuration
location = /humans.txt {
    alias /etc/nginx/assets/humans.txt;
    # .txt files are served as text/plain via mime.types by default
}

Method B: Using return

Nginx Site Configuration
location = /humans.txt {
    default_type text/plain;
    return 200 "# TEAM\nLead Developer: Jane Doe\nContact: jane[at]example.com\n\n# SITE\nLast update: 2025-12-18\n";
}

Example 4: A Dynamic JSON Health Check

Sometimes you need a simple health check endpoint that returns a JSON response, like {"status": "ok"}. This is often used by load balancers or monitoring tools. Answering it directly from Nginx is far cheaper than waking a full application instance just to respond, though keep in mind it only confirms that Nginx itself is up, not the upstream application.

Nginx Site Configuration
server {
    # ...

    location = /health {
        # Keep high-frequency probes out of the access logs
        access_log off;

        # `default_type` sets the Content-Type of this generated response;
        # using `add_header Content-Type ...` would emit a second,
        # duplicate Content-Type header.
        default_type application/json;

        # Return a simple JSON object
        return 200 '{"status":"ok", "version":"1.2.3"}';
    }

    # ...

}
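A monitoring script might then consume the endpoint like this. The response body is inlined below so the sketch is self-contained; a real probe would fetch it with curl, and a tool like jq would parse it more robustly:

```shell
# Inlined stand-in for something like: curl -fsS http://localhost/health
response='{"status":"ok", "version":"1.2.3"}'

# Crude, dependency-free extraction of the "status" field:
status=$(printf '%s' "$response" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "$status"   # ok

[ "$status" = "ok" ] && echo "healthy"
```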

Example 5: Let’s Encrypt HTTP-01 Challenges

A very common and practical use case is handling HTTP-01 validation for TLS certificates from Let’s Encrypt. The challenge requires serving a specific token file from the /.well-known/acme-challenge/ path. When multiple services or Docker containers run behind a single Nginx proxy, you can’t easily drop that file into each application’s filesystem.

The solution is to have Nginx intercept these requests and serve them from a common, shared directory that your ACME client (like certbot) can write to.

Nginx Site Configuration
server {
    # ...

    location /.well-known/acme-challenge/ {
        # `root` appends the full request URI, so challenge files are read
        # from /var/www/certbot/.well-known/acme-challenge/
        root /var/www/certbot;
    }

    # ... all other locations for your app
}

With this configuration, no matter which server_name receives the validation request, Nginx resolves the challenge file under /var/www/certbot/.well-known/acme-challenge/. Just make sure your ACME client writes its files there, for example by running certbot in webroot mode with -w /var/www/certbot.
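To see why this lines up with an ACME client’s webroot mode, trace the path Nginx resolves. This is a string-handling sketch; the token name is a placeholder (real tokens are issued by the CA):

```shell
webroot="/var/www/certbot"
uri="/.well-known/acme-challenge/sometoken"   # "sometoken" is a placeholder
challenge_path="${webroot}${uri}"             # `root` appends the full URI
echo "$challenge_path"   # /var/www/certbot/.well-known/acme-challenge/sometoken
```

Clients like certbot write their tokens under `<webroot>/.well-known/acme-challenge/`, which is exactly where this configuration reads them.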


Example 6: Mobile App Association Files

To create seamless links between your website and your mobile app (Android App Links and iOS Universal Links), you need to host specific JSON files.

  • Android: /.well-known/assetlinks.json
  • iOS: /.well-known/apple-app-site-association (note: no file extension)

Nginx can serve both. The iOS file is particularly interesting because it must be served with the application/json content type.

Nginx Site Configuration
server {
    # ...

    # Android App Links
    location = /.well-known/assetlinks.json {
        alias /etc/nginx/assets/assetlinks.json;
        add_header 'Content-Type' 'application/json';
    }

    # iOS Universal Links
    location = /.well-known/apple-app-site-association {
        alias /etc/nginx/assets/apple-app-site-association;
        # iOS requires this specific content type
        add_header 'Content-Type' 'application/json';
    }

    # ...

}
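For context, the assetlinks.json referenced above generally has this shape; the package name and certificate fingerprint below are placeholders you would replace with your app’s real values:

```json
[
  {
    "relation": ["delegate_permission/common.handle_all_urls"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.app",
      "sha256_cert_fingerprints": [
        "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99"
      ]
    }
  }
]
```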

Conclusion

By leveraging Nginx’s alias and return directives, you gain immense flexibility in how you serve content. This approach decouples standard web files from your application’s codebase, making it easier to manage deployments, secure your environment, and improve performance.

Whether you’re running a Docker container, a managed Node.js service, or a complex Java application, Nginx can act as a powerful gatekeeper that serves these essential “virtual” files on your behalf, simply and efficiently.