Video Chat

Developers can interact with an LLM AI assistant based on the context of one or more videos. By simply providing the videoNos, a developer can ask the LLM to analyze, summarize, annotate, or otherwise reason over the uploaded videos. The API also supports streaming responses to minimize latency during response generation.

Host URL

POST /api/serve/video/chat

Request Example

import requests

token = "<access token>"  # your API access token
headers = {"Authorization": token}
data = {
    "videoNos": ["<videoNo>"],  # list of videoNos for the videos to chat about
    "message": "<your prompt>",
    "history": [],
    "stream": False,
}
response = requests.post(
    "https://mavi-backend.memories.ai/api/serve/video/chat",
    headers=headers,
    json=data,
)
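
The example above waits for the complete reply. When "stream" is set to True, the same endpoint can return the answer incrementally. A minimal sketch of consuming a streamed reply follows, assuming the server delivers the answer as chunked text lines; the exact chunk framing is not specified in this document and should be verified against a live response:

import requests

token = "<access token>"  # your API access token
headers = {"Authorization": token}
data = {
    "videoNos": ["<videoNo>"],
    "message": "<your prompt>",
    "history": [],
    "stream": True,  # ask the server to stream the reply
}
# stream=True on requests.post keeps the connection open so the body
# can be read incrementally while the LLM generates it
response = requests.post(
    "https://mavi-backend.memories.ai/api/serve/video/chat",
    headers=headers,
    json=data,
    stream=True,
)
# Assumption: each chunk arrives as a line of text; adapt the parsing
# if the server uses a different framing (e.g. SSE "data:" frames).
for line in response.iter_lines(decode_unicode=True):
    if line:
        print(line)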

Request Body

{
  "videoNos": [
    "string"
  ],
  "message": "string",
  "history": [
    {
      "robot": "string",
      "user": "string"
    }
  ],
  "stream": true
}

Request Parameters

Name          | Location | Type     | Required | Description
------------- | -------- | -------- | -------- | ---------------------------------
Authorization | header   | string   | Yes      | authorization token
videoNos      | body     | [string] | Yes      | list of video numbers
message       | body     | string   | Yes      | natural language prompt
history       | body     | [object] | No       | list of prior conversation turns
» robot       | body     | string   | Yes      | historical LLM response
» user        | body     | string   | Yes      | historical message sent to the LLM
stream        | body     | boolean  | Yes      | whether to stream the response
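
Each history entry pairs an earlier prompt (user) with the assistant's earlier reply (robot), so the LLM can answer follow-up questions in context. A minimal sketch of a two-turn request body follows; the video number and message texts are placeholders:

history = [
    {
        "user": "Summarize the main events in this video.",       # earlier prompt
        "robot": "The video opens with a product demo, then...",  # earlier LLM reply
    },
]
data = {
    "videoNos": ["<videoNo>"],
    "message": "Who presents the second demo?",  # follow-up, answered in context
    "history": history,
    "stream": False,
}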

Response Example

Status code 200

{
  "code": "string",
  "msg": "string",
  "data": {
    "msg": "string"
  }
}

Response Result

Status code | Status code msg | Description | Data
----------- | --------------- | ----------- | ------
200         | OK              | none        | Inline

Response Structure

Status code 200

Name  | Type   | Required | Restriction | Description
----- | ------ | -------- | ----------- | ---------------------------
code  | string | true     | none        | response code
msg   | string | true     | none        | message with response code
data  | object | true     | none        | JSON data
» msg | string | true     | none        | message returned by the LLM
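
Putting this together, a non-streaming reply can be unwrapped as sketched below. The HTTP-level check follows the status table above; the API-level code and msg values returned on success are not specified here, so verify them against a live response:

response = requests.post(
    "https://mavi-backend.memories.ai/api/serve/video/chat",
    headers=headers,
    json=data,
)
response.raise_for_status()  # raise on a non-2xx HTTP status

body = response.json()
# code/msg report the API-level result; the LLM's answer is in data.msg
print("API code:", body["code"], "-", body["msg"])
print("LLM reply:", body["data"]["msg"])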