After deploying locally, you can open the interactive API docs at localhost:7860/docs (available when the WebUI is launched with the `--api` flag). If that page is inaccessible, check the GitHub wiki or the documents linked below instead. I wrote this article to record the txt2img API parameters I use in my work.

Since my work focuses on architectural exterior rendering, I don't need many of the character-related settings, so relatively few parameters are required.

For more details, you can refer to this article: Comprehensive Stable Diffusion WebUI API Call Example, Including ControlNet and Segment Anything APIs (with JSON examples)

Online Base64 Viewer: https://www.lddgo.net/convert/base64-to-image
Online Image to Base64 Converter: https://www.lddgo.net/convert/imagebasesix (for scripts, a Python equivalent is sketched after this list)
SD API GitHub Introduction: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API
ControlNet GitHub API Introduction: https://github.com/Mikubill/sd-webui-controlnet/wiki/API
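The online converters above are handy for spot checks, but in code you will want to do the conversion yourself. Below is a minimal Python sketch (the file name is a placeholder) for encoding a local image into the base64 string that fields like `input_image` expect, and for decoding the base64 images the API returns:

```python
import base64

def image_to_base64(path: str) -> str:
    """Encode a local image file to a base64 string for fields like input_image."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def base64_to_image(b64: str, path: str) -> None:
    """Decode a base64 string returned by the API and write it to an image file."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64))

# Hypothetical usage: "lineart.png" is a placeholder file name
control_image = image_to_base64("lineart.png")
```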

I use the HuiShi launcher UI. Because its interface is in Chinese and I was new to SD development, the mapping from some UI controls to API parameters was not obvious at first. The ControlNet GitHub wiki is quite detailed and includes parameter examples, which helps with quick orientation.
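Since the API expects the exact model and preprocessor strings shown in the UI, it also helps to query the extension for them instead of guessing. A small sketch against the `/controlnet/model_list` and `/controlnet/module_list` endpoints documented in the ControlNet wiki (the base URL assumes a default local deployment started with `--api`):

```python
import requests

BASE_URL = "http://localhost:7860"  # assumption: default local deployment with --api

# Exact model names the API accepts, e.g. "controlnet++_union_sdxl_promax [9460e4db]"
models = requests.get(f"{BASE_URL}/controlnet/model_list").json()
print(models["model_list"])

# Available preprocessor (module) names
modules = requests.get(f"{BASE_URL}/controlnet/module_list").json()
print(modules["module_list"])
```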

Below is my basic SD configuration. I first test the overall output manually in the WebUI; developers can then reproduce the same configuration through the API.

![Image Description](https://cdn.bimath.com/blog/pg/Snipaste_2026-01-04_15-27-44.png)
**ControlNet Control Area**
![Image Description](https://cdn.bimath.com/blog/pg/Snipaste_2026-01-04_15-27-53.png)
**WebUI Output**
![Image Description](https://cdn.bimath.com/blog/pg/Snipaste_2026-01-04_15-28-00.png)

Based on the configuration above, the WebUI settings translate to the JSON below. The `//` comments are annotations only; JSON does not allow comments, so strip them before sending the request:

```json
{
    "prompt": "(ultra-realistic industrial daylight rendering:1.6)", // Prompt
    "negative_prompt": "curved surfaces", // Negative prompt
    "seed": -1, // Random seed (-1 = random)
    "batch_size": 1, // Batch size
    "n_iter": 1, // Iteration (batch) count
    "steps": 20, // Sampling steps
    "cfg_scale": 7, // CFG scale
    "width": 1024, // Width
    "height": 512, // Height
    "override_settings": {
        "sd_model_checkpoint": "realvisxlV50_v50Bakedvae.safetensors [6a35a78557]",
        "sd_vae": "None"
    }, // Per-request overrides of WebUI settings, e.g. the checkpoint or CLIP skip. If the VAE has no special requirement, leave it on Automatic; pinning one can trigger "RuntimeError: Expected all tensors to be on the same device..."
    "sampler_name": "DPM++ 2M", // Sampler name
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    "enabled": true, // Enable this unit
                    "input_image": "<base64>", // Input image; must be a base64 string, not a file path
                    "module": "lineart_standard (from white bg & black line)", // Preprocessor
                    "model": "controlnet++_union_sdxl_promax [9460e4db]", // ControlNet model
                    "weight": 1.4, // Control weight
                    "invert_image": false, // Invert input
                    "resize_mode": "Crop and Resize", // Resize mode. If unsure, generate an image in the WebUI and read the resize-mode label under the output; the GitHub wiki does not always list every value, so trust the UI.
                    "lowvram": false, // Low VRAM mode
                    "processor_res": 512, // Preprocessor resolution
                    "threshold_a": 0.5, // Threshold A
                    "threshold_b": 0.5, // Threshold B
                    "pixel_perfect": true, // Pixel Perfect mode
                    "control_mode": "My prompt is more important", // Control mode. Per the wiki, either text or a number works, but numbers caused errors for me, so I pass the text value.
                    "starting_control_step": 0, // Starting control step (the wiki documents this as guidance_start)
                    "ending_control_step": 0.9, // Ending control step (the wiki documents this as guidance_end)
                    "process_complete": true, // Run the preprocessor
                    "control_type": "lineart" // Control type
                }
            ]
        }
    }
}
```
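To put this payload to work, strip the comments and POST it to `/sdapi/v1/txt2img`. Below is a minimal Python sketch assuming a default local deployment started with `--api` (output file names are placeholders); the generated images come back as base64 strings in the `images` field of the response:

```python
import base64
import requests

BASE_URL = "http://localhost:7860"  # assumption: default local deployment with --api

payload = {
    "prompt": "(ultra-realistic industrial daylight rendering:1.6)",
    "negative_prompt": "curved surfaces",
    "seed": -1,
    "batch_size": 1,
    "n_iter": 1,
    "steps": 20,
    "cfg_scale": 7,
    "width": 1024,
    "height": 512,
    "sampler_name": "DPM++ 2M",
    "override_settings": {
        "sd_model_checkpoint": "realvisxlV50_v50Bakedvae.safetensors [6a35a78557]",
        "sd_vae": "None",
    },
    # The full ControlNet unit from the JSON above goes here, with "input_image"
    # set to a real base64 string:
    # "alwayson_scripts": {"controlnet": {"args": [{...}]}},
}

resp = requests.post(f"{BASE_URL}/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# Each entry in "images" is a base64-encoded PNG
for i, b64 in enumerate(resp.json()["images"]):
    with open(f"output_{i}.png", "wb") as f:
        f.write(base64.b64decode(b64))
```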