public static final class ModelInputContentSyncDetectResponseBody.TraceInfo.Builder extends Object
| Modifier and Type | Method and Description |
|---|---|
| ModelInputContentSyncDetectResponseBody.TraceInfo.Builder | blockWord(ModelInputContentSyncDetectResponseBody.BlockWord blockWord)<br>Detected keywords |
| ModelInputContentSyncDetectResponseBody.TraceInfo | build() |
| ModelInputContentSyncDetectResponseBody.TraceInfo.Builder | denyTopics(ModelInputContentSyncDetectResponseBody.DenyTopics denyTopics)<br>Sensitive topic object list |
| ModelInputContentSyncDetectResponseBody.TraceInfo.Builder | harmfulCategories(ModelInputContentSyncDetectResponseBody.HarmfulCategories harmfulCategories)<br>Harmful categories |
| ModelInputContentSyncDetectResponseBody.TraceInfo.Builder | promptAttack(ModelInputContentSyncDetectResponseBody.PromptAttack promptAttack)<br>Prompt attack information |
| ModelInputContentSyncDetectResponseBody.TraceInfo.Builder | sensitiveType(ModelInputContentSyncDetectResponseBody.SensitiveType sensitiveType)<br>Sensitive type |
public ModelInputContentSyncDetectResponseBody.TraceInfo.Builder blockWord(ModelInputContentSyncDetectResponseBody.BlockWord blockWord)
Detected keywords
public ModelInputContentSyncDetectResponseBody.TraceInfo.Builder denyTopics(ModelInputContentSyncDetectResponseBody.DenyTopics denyTopics)
Sensitive topic object list
public ModelInputContentSyncDetectResponseBody.TraceInfo.Builder harmfulCategories(ModelInputContentSyncDetectResponseBody.HarmfulCategories harmfulCategories)
Harmful categories
public ModelInputContentSyncDetectResponseBody.TraceInfo.Builder promptAttack(ModelInputContentSyncDetectResponseBody.PromptAttack promptAttack)
Prompt attack information
public ModelInputContentSyncDetectResponseBody.TraceInfo.Builder sensitiveType(ModelInputContentSyncDetectResponseBody.SensitiveType sensitiveType)
public ModelInputContentSyncDetectResponseBody.TraceInfo build()
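The methods above follow the standard fluent-builder pattern: each setter returns the builder itself so calls can be chained, and build() materializes the final TraceInfo. A minimal, self-contained sketch of that pattern (plain String fields stand in for the SDK's BlockWord, PromptAttack, and other nested types, which this sketch does not depend on):

```java
// Simplified stand-in for the SDK's nested TraceInfo/Builder pair.
// The real class is ModelInputContentSyncDetectResponseBody.TraceInfo;
// String fields here are illustrative assumptions, not the SDK's types.
class TraceInfo {
    private final String blockWord;
    private final String promptAttack;

    // Private constructor: instances are created only through the Builder.
    private TraceInfo(Builder builder) {
        this.blockWord = builder.blockWord;
        this.promptAttack = builder.promptAttack;
    }

    String getBlockWord() { return blockWord; }
    String getPromptAttack() { return promptAttack; }

    static final class Builder {
        private String blockWord;
        private String promptAttack;

        // Each setter stores the value and returns this, enabling chaining.
        Builder blockWord(String blockWord) {
            this.blockWord = blockWord;
            return this;
        }

        Builder promptAttack(String promptAttack) {
            this.promptAttack = promptAttack;
            return this;
        }

        // build() produces the immutable TraceInfo from the collected fields.
        TraceInfo build() { return new TraceInfo(this); }
    }

    public static void main(String[] args) {
        TraceInfo trace = new Builder()
                .blockWord("example-keyword")
                .promptAttack("injection-attempt")
                .build();
        System.out.println(trace.getBlockWord() + " / " + trace.getPromptAttack());
    }
}
```

Because the setters mutate and return the same builder, any subset of fields may be set in any order before build(); unset fields simply remain null in the resulting object.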
Copyright © 2025. All rights reserved.