Video Chat
Developers can interact with an LLM-powered AI assistant grounded in the content of one or more videos. By simply providing the videoNos, developers can ask the LLM to analyze, summarize, annotate, and more across all of the uploaded videos. This API also supports streaming responses to minimize latency while the reply is generated.
Host URL
POST /api/serve/video/chat
Request Example
```python
import requests

headers = {"Authorization": token}  # access token
data = {
    "videoNos": <list of videoNos>,
    "message": "<your prompt>",
    "history": [],
    "stream": False,
}
response = requests.post(
    "https://mavi-backend.memories.ai/api/serve/video/chat",
    headers=headers,
    json=data
)
```
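When `stream` is set to `true`, the reply arrives incrementally rather than as a single JSON object. Below is a minimal sketch of consuming such a response with `requests`, assuming the server emits the reply as line-delimited chunks; the exact wire format (e.g. SSE frames) is not specified in this section, and the token and videoNo values are placeholders:

```python
import requests

token = "<your access token>"  # placeholder
headers = {"Authorization": token}
data = {
    "videoNos": ["<videoNo>"],       # placeholder video number
    "message": "<your prompt>",
    "history": [],
    "stream": True,                  # request an incremental response
}

with requests.post(
    "https://mavi-backend.memories.ai/api/serve/video/chat",
    headers=headers,
    json=data,
    stream=True,  # let requests yield the body as it arrives
) as response:
    for line in response.iter_lines(decode_unicode=True):
        if line:  # skip keep-alive blank lines
            print(line)  # one chunk; parse according to the actual wire format
```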
Request Body
```json
{
    "videoNos": [
        "string"
    ],
    "message": "string",
    "history": [
        {
            "robot": "string",
            "user": "string"
        }
    ],
    "stream": true
}
```
Request Parameters
Name | Location | Type | Required | Description |
---|---|---|---|---|
Authorization | header | string | Yes | authorization token |
videoNos | body | [string] | Yes | list of video numbers |
message | body | string | Yes | natural language prompt |
history | body | [object] | No | prior conversation turns; each turn pairs a user message with the LLM's reply |
» robot | body | string | Yes | historical LLM response |
» user | body | string | Yes | historical message sent to LLM |
stream | body | boolean | Yes | whether to stream the response |
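Each `history` entry pairs a previous `user` message with the corresponding `robot` (LLM) reply, which lets follow-up prompts reference earlier turns. Below is a hedged sketch of a second-turn request body; the message texts are purely illustrative:

```python
# Hypothetical follow-up turn: pass the earlier exchange back via `history`
# so the LLM can resolve references such as "the Q&A".
history = [
    {
        "user": "Summarize the key events in this video.",                      # earlier prompt
        "robot": "The video shows a product demo followed by a Q&A session.",   # earlier LLM reply
    }
]
data = {
    "videoNos": ["<videoNo>"],                                # placeholder video number
    "message": "What questions were asked during the Q&A?",   # follow-up relying on context
    "history": history,
    "stream": False,
}
```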
Response Example
Status code 200
```json
{
    "code": "string",
    "msg": "string",
    "data": {
        "msg": "string"
    }
}
```
Response Result
Status code | Message | Description | Data |
---|---|---|---|
200 | OK | none | Inline |
Response Structure
Status code 200
Name | Type | Required | Restriction | Description |
---|---|---|---|---|
code | string | true | none | response code |
msg | string | true | none | message with response code |
data | object | true | none | JSON data |
» msg | string | true | none | message returned by LLM |
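For the non-streaming case, the LLM's reply sits under `data.msg` in the wrapper above. A minimal sketch of unpacking it, using the field names from this schema; the success value of `code` is not documented in this section, so only the HTTP status is checked:

```python
if response.status_code == 200:
    result = response.json()
    print("code:", result["code"])           # API-level response code
    print("reply:", result["data"]["msg"])   # message returned by the LLM
else:
    print("HTTP error:", response.status_code)
```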