
FLUX Outpainting extends an image naturally in any direction, filling new regions with contextually coherent content in a single call. Useful for aspect-ratio changes, banner generation, social media reformatting, or giving a composition more room to breathe.

Example output

The comparison shows the input image padded onto the target canvas (left) and the outpainted result (right). No prompt was used; the model extended the scene on its own.

Endpoint

Submit an outpainting job:
POST https://api.bfl.ai/v1/flux-tools/outpainting-v1
x-key: $BFL_API_KEY
Poll for the result:
GET https://api.bfl.ai/v1/get_result?id=<TASK_ID>
x-key: $BFL_API_KEY

Quick start

The API uses an asynchronous workflow:

1. Submit an outpainting request: POST your input image (base64) and the target canvas dimensions to the endpoint. The model extends the existing scene naturally; no prompt is needed.
2. Poll for the result: use the returned polling_url to check status until the image is ready.
#!/usr/bin/env python3
import base64
import os
import time
import requests

API_KEY = os.environ["BFL_API_KEY"]
BASE = "https://api.bfl.ai"
HEADERS = {"accept": "application/json", "x-key": API_KEY, "Content-Type": "application/json"}

IMAGE_PATH = "/path/to/input.png"
WIDTH, HEIGHT = 1024, 1024
REFERENCE_OFFSET_X = 100   # None = center horizontally
REFERENCE_OFFSET_Y = 50    # None = center vertically

with open(IMAGE_PATH, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "input_image": image_b64,
    "width": WIDTH,
    "height": HEIGHT,
    "output_format": "png",
}

if REFERENCE_OFFSET_X is not None:
    payload["reference_offset_x"] = REFERENCE_OFFSET_X
if REFERENCE_OFFSET_Y is not None:
    payload["reference_offset_y"] = REFERENCE_OFFSET_Y

submit = requests.post(f"{BASE}/v1/flux-tools/outpainting-v1", headers=HEADERS, json=payload)
submit.raise_for_status()
meta = submit.json()

task_id = meta["id"]
poll_url = meta.get("polling_url", f"{BASE}/v1/get_result?id={task_id}")

while True:
    r = requests.get(poll_url, headers={"accept": "application/json", "x-key": API_KEY})
    r.raise_for_status()
    result = r.json()

    status = result.get("status")
    if status == "Ready":
        print("Result URL:", result["result"]["sample"])
        break
    if status in {"Error", "Request Moderated", "Content Moderated", "Task not found"}:
        raise RuntimeError(f"Outpainting failed with status: {status} | payload: {result}")

    time.sleep(1)

Request parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| input_image | base64 string | Yes | Reference image to expand |
| width | integer | Yes | Target canvas width in pixels (>= 64) |
| height | integer | Yes | Target canvas height in pixels (>= 64) |
| reference_offset_x | integer | No | Left offset (px) of the reference image's top-left corner on the canvas. Negative values allowed. None = center horizontally |
| reference_offset_y | integer | No | Top offset (px) of the reference image's top-left corner on the canvas. Negative values allowed. None = center vertically |
| auto_crop | boolean | No | If true, crop the reference image to the canvas bounds when it extends beyond the edges. Defaults to false (out-of-bounds placements return 422) |
| output_format | string | No | png (default) or jpeg |
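As a sketch of how these parameters combine, the helper below assembles a request body from raw image bytes (build_outpaint_payload is a name invented here, not part of any official SDK). It omits the optional offsets when they are None, which tells the server to center the image:

```python
import base64

def build_outpaint_payload(image_bytes, width, height,
                           offset_x=None, offset_y=None,
                           auto_crop=False, output_format="png"):
    """Assemble an outpainting request body (hypothetical helper).

    Optional offsets are left out when None so the server centers
    the reference image on the canvas.
    """
    if width < 64 or height < 64:
        raise ValueError("width and height must be at least 64")
    payload = {
        "input_image": base64.b64encode(image_bytes).decode(),
        "width": width,
        "height": height,
        "output_format": output_format,
    }
    if offset_x is not None:
        payload["reference_offset_x"] = offset_x
    if offset_y is not None:
        payload["reference_offset_y"] = offset_y
    if auto_crop:
        payload["auto_crop"] = True
    return payload
```

Passing the resulting dict as `json=` to `requests.post` matches the quick-start script above.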

Image placement

reference_offset_x and reference_offset_y set the top-left corner of the reference image on the output canvas. You have two options for placing the reference image:
  • Centered (default): provide the image, set width and height, and leave reference_offset_x / reference_offset_y unset. The image is centered automatically.
  • Custom position: set reference_offset_x and reference_offset_y to control exactly where the top-left corner of the reference image lands on the canvas. Negative values are allowed; if any part of the reference falls outside the canvas, either set auto_crop: true or the request will return 422.
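The placement rules above can be checked client-side before submitting. This is a sketch: it assumes the server centers using integer floor division when the offsets are unset (the exact rounding is an assumption), and fits_canvas mirrors the documented 422 condition:

```python
def centered_offsets(canvas_w, canvas_h, ref_w, ref_h):
    # Presumed server behavior when both offsets are None: center the
    # reference on the canvas (floor-division rounding is an assumption).
    return (canvas_w - ref_w) // 2, (canvas_h - ref_h) // 2

def fits_canvas(canvas_w, canvas_h, ref_w, ref_h, off_x, off_y):
    # True if the reference lies fully inside the canvas.
    # When False, set auto_crop: true or expect a 422 response.
    return (off_x >= 0 and off_y >= 0
            and off_x + ref_w <= canvas_w
            and off_y + ref_h <= canvas_h)
```

For example, to extend a 1024x1024 image to the right only, submit width=2048, height=1024, reference_offset_x=0, reference_offset_y=0.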

Response format

Initial response

{
  "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "polling_url": "https://api.bfl.ai/v1/get_result?id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
Always poll the URL returned in the response.

Polling response (success)

{
  "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "status": "Ready",
  "result": {
    "sample": "https://delivery.bfl.ai/..."
  }
}
When status is "Ready", use result.sample.
Signed delivery URLs are only valid for 10 minutes. Retrieve your result within this timeframe.
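Because the signed URL expires quickly, it helps to fail fast and download immediately. A minimal guard along these lines (extract_sample is a hypothetical helper, not part of any SDK) keeps the expiry easy to respect:

```python
def extract_sample(result):
    """Return the delivery URL from a polling response, or raise.

    The signed URL is only valid for about 10 minutes, so download
    the image as soon as this returns.
    """
    if result.get("status") != "Ready":
        raise RuntimeError(f"task not ready: {result.get('status')!r}")
    return result["result"]["sample"]
```

Pass the returned URL straight to `requests.get` and write the bytes to disk in the same step, rather than storing the URL for later.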

Tips for best results

  • The model extends the existing scene naturally. The endpoint is tuned to continue the input image’s content, lighting, and composition on its own.
  • The model was trained on green, blue, and magenta fill colors and performs best with them internally. No caller action is needed: the server handles fill colors automatically.
  • Keep total output dimensions reasonable. Very large canvases or extreme aspect ratios may reduce quality.

Troubleshooting

  • 403 Forbidden: your API key is missing or your project doesn't have access to this endpoint.
  • 422 / validation errors: check the base64 encoding and that width / height are present and at least 64. The endpoint rejects unknown fields; use reference_offset_x / reference_offset_y (not the older bbox_x1 / bbox_y1).
  • Visible seams: give the model more canvas room around the reference image.
For the full list of HTTP status codes and polling response types returned by the API, see the Errors reference.