public static final class GetModelInputContentDetectResultResponseBody.TraceInfo.Builder extends Object
| Modifier and Type | Method and Description |
|---|---|
| `GetModelInputContentDetectResultResponseBody.TraceInfo.Builder` | `blockWord(GetModelInputContentDetectResultResponseBody.BlockWord blockWord)` Detected keywords. |
| `GetModelInputContentDetectResultResponseBody.TraceInfo` | `build()` |
| `GetModelInputContentDetectResultResponseBody.TraceInfo.Builder` | `denyTopics(GetModelInputContentDetectResultResponseBody.DenyTopics denyTopics)` Sensitive topic object list. |
| `GetModelInputContentDetectResultResponseBody.TraceInfo.Builder` | `harmfulCategories(GetModelInputContentDetectResultResponseBody.HarmfulCategories harmfulCategories)` List of harmful category result objects. |
| `GetModelInputContentDetectResultResponseBody.TraceInfo.Builder` | `promptAttack(GetModelInputContentDetectResultResponseBody.PromptAttack promptAttack)` Prompt attack information. |
public GetModelInputContentDetectResultResponseBody.TraceInfo.Builder blockWord(GetModelInputContentDetectResultResponseBody.BlockWord blockWord)
Detected keywords.
public GetModelInputContentDetectResultResponseBody.TraceInfo.Builder denyTopics(GetModelInputContentDetectResultResponseBody.DenyTopics denyTopics)
Sensitive topic object list.
public GetModelInputContentDetectResultResponseBody.TraceInfo.Builder harmfulCategories(GetModelInputContentDetectResultResponseBody.HarmfulCategories harmfulCategories)
List of harmful category result objects.
public GetModelInputContentDetectResultResponseBody.TraceInfo.Builder promptAttack(GetModelInputContentDetectResultResponseBody.PromptAttack promptAttack)
Prompt attack information.
public GetModelInputContentDetectResultResponseBody.TraceInfo build()
Builds and returns the `TraceInfo` instance.
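The methods above follow the standard fluent builder pattern: each setter returns the `Builder` itself so calls can be chained, and `build()` produces the immutable `TraceInfo`. The sketch below illustrates that pattern in a self-contained way; the nested types (`BlockWord`, `DenyTopics`, etc.) are simplified here to `String` stand-ins and are not the real SDK classes.

```java
// Minimal, self-contained sketch of the TraceInfo.Builder usage pattern.
// The field types are illustrative assumptions (Strings instead of the
// SDK's BlockWord/DenyTopics/HarmfulCategories/PromptAttack objects).
class TraceInfo {
    final String blockWord;          // detected keywords
    final String denyTopics;         // sensitive topic object list
    final String harmfulCategories;  // harmful category result objects
    final String promptAttack;       // prompt attack information

    private TraceInfo(Builder b) {
        this.blockWord = b.blockWord;
        this.denyTopics = b.denyTopics;
        this.harmfulCategories = b.harmfulCategories;
        this.promptAttack = b.promptAttack;
    }

    static class Builder {
        private String blockWord;
        private String denyTopics;
        private String harmfulCategories;
        private String promptAttack;

        // Each setter returns this Builder, enabling method chaining.
        Builder blockWord(String v) { this.blockWord = v; return this; }
        Builder denyTopics(String v) { this.denyTopics = v; return this; }
        Builder harmfulCategories(String v) { this.harmfulCategories = v; return this; }
        Builder promptAttack(String v) { this.promptAttack = v; return this; }

        // build() materializes the immutable TraceInfo from the builder state.
        TraceInfo build() { return new TraceInfo(this); }
    }
}

public class TraceInfoBuilderDemo {
    public static void main(String[] args) {
        TraceInfo info = new TraceInfo.Builder()
                .blockWord("banned-term")          // hypothetical value
                .promptAttack("jailbreak-attempt") // hypothetical value
                .build();
        System.out.println(info.blockWord + " / " + info.promptAttack);
        // prints "banned-term / jailbreak-attempt"
    }
}
```

Fields left unset on the builder remain `null` in the resulting object, which matches how optional trace fields are typically omitted from a detection response.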
Copyright © 2025. All rights reserved.