| Modifier and Type | Method | Description |
| --- | --- | --- |
| `GetModelInputContentDetectResultResponseBody.TraceInfo.Builder` | `blockWord(GetModelInputContentDetectResultResponseBody.BlockWord blockWord)` | Detected keywords |
| `static GetModelInputContentDetectResultResponseBody.TraceInfo.Builder` | `builder()` | |
| `GetModelInputContentDetectResultResponseBody.TraceInfo.Builder` | `denyTopics(GetModelInputContentDetectResultResponseBody.DenyTopics denyTopics)` | Sensitive topic object list |
| `GetModelInputContentDetectResultResponseBody.TraceInfo.Builder` | `harmfulCategories(GetModelInputContentDetectResultResponseBody.HarmfulCategories harmfulCategories)` | List of harmful category result objects |
| `GetModelInputContentDetectResultResponseBody.TraceInfo.Builder` | `promptAttack(GetModelInputContentDetectResultResponseBody.PromptAttack promptAttack)` | Prompt attack information |
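The methods above follow the fluent builder pattern: each setter returns the `Builder` so calls can be chained, and the static `builder()` factory starts the chain. The sketch below is a simplified, self-contained stand-in for illustration only, assuming the usual `build()` terminal step; `TraceInfoSketch` and its plain `String` fields are hypothetical substitutes for the real SDK class and its nested types (`BlockWord`, `DenyTopics`, `HarmfulCategories`, `PromptAttack`).

```java
// Hypothetical, simplified stand-in for TraceInfo and its Builder.
// String fields substitute for the SDK's nested result types.
public class TraceInfoSketch {
    private final String blockWord;         // detected keywords
    private final String denyTopics;        // sensitive topic object list
    private final String harmfulCategories; // harmful category results
    private final String promptAttack;      // prompt attack information

    private TraceInfoSketch(Builder b) {
        this.blockWord = b.blockWord;
        this.denyTopics = b.denyTopics;
        this.harmfulCategories = b.harmfulCategories;
        this.promptAttack = b.promptAttack;
    }

    // Static factory, mirroring TraceInfo.builder()
    public static Builder builder() { return new Builder(); }

    public String getBlockWord() { return blockWord; }
    public String getDenyTopics() { return denyTopics; }

    public static class Builder {
        private String blockWord;
        private String denyTopics;
        private String harmfulCategories;
        private String promptAttack;

        // Each setter returns the Builder to allow chaining.
        public Builder blockWord(String v) { this.blockWord = v; return this; }
        public Builder denyTopics(String v) { this.denyTopics = v; return this; }
        public Builder harmfulCategories(String v) { this.harmfulCategories = v; return this; }
        public Builder promptAttack(String v) { this.promptAttack = v; return this; }

        public TraceInfoSketch build() { return new TraceInfoSketch(this); }
    }

    public static void main(String[] args) {
        TraceInfoSketch info = TraceInfoSketch.builder()
                .blockWord("forbidden-term")
                .denyTopics("restricted-topic")
                .build();
        System.out.println(info.getBlockWord()); // prints "forbidden-term"
    }
}
```

The private constructor plus static `builder()` factory is the common reason SDKs expose response bodies this way: the object is immutable once built, while the chained setters keep construction of many optional fields readable.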