Automation (Recommended)

Summary
  1. Remove all FTP and API code from your legacy projects.
  2. Replace it with simple file copy code (copy from your sync folders instead).
  3. Run the DNFileVault sync script to keep those folders up-to-date — don't embed FTP/API code inside your projects.

Automation Quickstart
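One way to wire the sync into a build or scheduled job is to run the starter script as a child process (the name `dnfilevault_sync.py` below is a placeholder for whichever starter script you download):

```python
import subprocess
import sys

def run_sync(script_path: str) -> None:
    """Run the DNFileVault sync script as a child process.

    check=True raises CalledProcessError if the sync exits non-zero,
    so a failed sync stops the pipeline instead of copying stale files.
    """
    subprocess.run([sys.executable, script_path], check=True)
```

Call `run_sync("dnfilevault_sync.py")` before the copy step, or schedule it with cron / Task Scheduler.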

Thanks for reaching out! I'd be happy to help you get started with automation.

API Documentation: Here is our API "cookbook" markdown file that documents endpoints and authentication: DNFILEVAULT_MASTER_GUIDE.md

Prefer a browsable API reference? Use our ReDoc page: DNFileVault API Reference

GitHub Repository: Browse all scripts, documentation, and examples: github.com/rickfortier/dnfilevault_scripts

Sync Script: Use a starter script to authenticate and synchronize your purchases + groups into a local "sync folder". Then your existing codebase only needs to copy files from that folder.

Follow the script's instructions and store your email address and password in environment variables rather than hard-coding them in source.
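Reading those credentials back out might look like this minimal sketch (the variable names match the example further down; the error message is just a suggestion):

```python
import os

def get_credentials() -> tuple[str, str]:
    """Fetch DNFileVault credentials from the environment, failing loudly if unset."""
    try:
        return os.environ["DNFILEVAULT_EMAIL"], os.environ["DNFILEVAULT_PASSWORD"]
    except KeyError as missing:
        raise SystemExit(f"Set the {missing.args[0]} environment variable first")
```

Failing with a clear message beats a bare `KeyError` deep inside a sync run.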

Optional — DIY approach: If you want to learn how to create/modify scripts yourself, Cursor (`cursor.sh`) is excellent for generating and debugging automation code (paste errors back into the prompt and it will fix them).

Timeouts? Check your User-Agent

DNFileVault has anti-scanner protection on the API. If requests look "bot-like" (missing/short User-Agent or default tooling), responses may be intentionally slowed to protect the service. This is not a ban.

Recommended settings:
- Always set a descriptive User-Agent (don't use blank/default)
- Use streaming downloads
- Use longer timeouts for large files (e.g., 300s)

Good example User-Agent:
DNFileVaultClient/1.0 ([email protected])

Python requests example:

```python
import os
import requests

BASE_URL = "https://api.dnfilevault.com"

email = os.environ["DNFILEVAULT_EMAIL"]
password = os.environ["DNFILEVAULT_PASSWORD"]

headers = {
    "User-Agent": "DNFileVaultClient/1.0 ([email protected])"
}

# Authenticate and grab the bearer token.
login = requests.post(
    f"{BASE_URL}/auth/login",
    json={"email": email, "password": password},
    headers=headers,
    timeout=60,
)
login.raise_for_status()
token = login.json()["token"]

headers["Authorization"] = f"Bearer {token}"

# List the files in a group (group 2 here).
files_resp = requests.get(
    f"{BASE_URL}/groups/2/files",
    headers=headers,
    timeout=60,
)
files_resp.raise_for_status()
files = files_resp.json()

# Large downloads: stream + longer timeout.
resp = requests.get(
    f"{BASE_URL}/download/<uuid_filename>",
    headers=headers,
    stream=True,
    timeout=300,
)
try:
    resp.raise_for_status()
    with open("download.zip", "wb") as f:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):
            if chunk:
                f.write(chunk)
finally:
    resp.close()
```