
Endpoint

POST /api/get-messages
Retrieve all messages for a specific chat session, including both user messages and AI responses. Messages are returned in chronological order.
This endpoint runs on Vercel’s Edge Runtime for optimal performance and low latency.

Request Body

chatId
number
required
The ID of the chat session to retrieve messages from. Must be a valid chat ID created via /api/create-chat.

Response

Returns a JSON array of message objects; the array itself is the response body.
messages
array
Array of message objects representing the conversation history

Message Object

id
number
Unique identifier for the message
chatId
number
The chat session this message belongs to
content
string
The text content of the message
role
string
The role of the message sender. Either:
  • "user" - Message sent by the human user
  • "system" - Response generated by the AI assistant
createdAt
timestamp
ISO 8601 timestamp of when the message was created
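The message object above can be modeled in TypeScript like this (a sketch based on the fields listed here; the runtime guard is a suggested pattern for validating values parsed from `response.json()`, not part of the API itself):

```typescript
interface Message {
  id: number;              // unique message identifier
  chatId: number;          // owning chat session
  content: string;         // message text
  role: 'user' | 'system'; // "system" means an AI response
  createdAt: string;       // ISO 8601 timestamp (UTC)
}

// Runtime guard for untyped JSON coming off the wire.
function isMessage(value: unknown): value is Message {
  const m = value as Message;
  return (
    typeof m === 'object' && m !== null &&
    typeof m.id === 'number' &&
    typeof m.chatId === 'number' &&
    typeof m.content === 'string' &&
    (m.role === 'user' || m.role === 'system') &&
    typeof m.createdAt === 'string'
  );
}
```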

Example Request

const response = await fetch('/api/get-messages', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    chatId: 1
  })
});

if (!response.ok) {
  throw new Error(`Request failed with status ${response.status}`);
}

const messages = await response.json();
console.log(messages);

Example Response

[
  {
    "id": 1,
    "chatId": 1,
    "content": "What is the main topic of this document?",
    "role": "user",
    "createdAt": "2024-03-15T10:30:00.000Z"
  },
  {
    "id": 2,
    "chatId": 1,
    "content": "Based on the document, the main topic is artificial intelligence and its applications in natural language processing. The document discusses various techniques including transformer models, attention mechanisms, and their use in modern AI systems.",
    "role": "system",
    "createdAt": "2024-03-15T10:30:05.000Z"
  },
  {
    "id": 3,
    "chatId": 1,
    "content": "Can you explain the attention mechanism in more detail?",
    "role": "user",
    "createdAt": "2024-03-15T10:31:00.000Z"
  },
  {
    "id": 4,
    "chatId": 1,
    "content": "According to the document, the attention mechanism allows the model to focus on different parts of the input sequence when producing each output token. It calculates attention weights that determine how much importance to give to each input element, enabling the model to capture long-range dependencies more effectively than traditional RNNs.",
    "role": "system",
    "createdAt": "2024-03-15T10:31:08.000Z"
  }
]

Response Structure

Messages are returned as a JSON array, ordered by creation time (oldest first):
  • Empty array [] is returned if no messages exist for the chat
  • Each message includes complete metadata
  • Timestamps are in ISO 8601 format (UTC)

Use Cases
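The snippets below assume a small getMessages helper wrapping this endpoint. A minimal sketch (the Message shape mirrors the fields documented above):

```typescript
interface Message {
  id: number;
  chatId: number;
  content: string;
  role: 'user' | 'system';
  createdAt: string;
}

// Thin wrapper around POST /api/get-messages.
async function getMessages(chatId: number): Promise<Message[]> {
  const response = await fetch('/api/get-messages', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ chatId }),
  });
  if (!response.ok) {
    throw new Error(`get-messages failed with status ${response.status}`);
  }
  return response.json();
}
```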

Load Conversation History

When a user opens an existing chat, retrieve all previous messages:
const loadChat = async (chatId: number) => {
  const messages = await getMessages(chatId);
  displayMessages(messages);
};

Sync Messages

Periodically sync messages to ensure clients have the latest data:
const syncMessages = async (chatId: number, lastMessageId: number) => {
  // The endpoint has no pagination, so this refetches the full history
  // and keeps only messages newer than the last one seen.
  const allMessages = await getMessages(chatId);
  return allMessages.filter(msg => msg.id > lastMessageId);
};

Export Conversation

Export the full conversation history for backup or analysis:
const exportChat = async (chatId: number) => {
  const messages = await getMessages(chatId);
  const text = messages
    .map(msg => `[${msg.role.toUpperCase()}] ${msg.content}`)
    .join('\n\n');
  return text;
};

Message Roles

The role field uses the user_system_enum database enum:
Role      Description
user      Messages sent by the human user
system    Responses generated by the AI assistant
The role "system" refers to AI-generated messages, not system administration messages. All AI responses use the "system" role.
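When rendering a conversation, the usual pattern is to branch on role. A sketch (the label strings are arbitrary choices, not part of the API):

```typescript
// Map the API's role values to user-facing labels.
// Note: "system" is the AI assistant here, not an admin message.
function displayLabel(role: 'user' | 'system'): string {
  return role === 'user' ? 'You' : 'Assistant';
}
```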

Database Schema

Messages are stored with the following schema:
Field      Type       Description
id         serial     Primary key (auto-increment)
chatId     integer    Foreign key to chats table
content    text       Message content (unlimited length)
role       enum       Either "user" or "system"
createdAt  timestamp  Creation timestamp (auto-generated)

Performance Considerations

  • The endpoint runs on Edge Runtime for low latency
  • Messages are retrieved with a single database query
  • No pagination is implemented - all messages are returned
  • For chats with 1000+ messages, consider implementing pagination
If you have very long conversations, consider implementing client-side pagination or lazy loading to improve UI performance.
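Since the endpoint returns the full array, client-side pagination reduces to slicing it for rendering. A minimal sketch (the page size is an arbitrary choice):

```typescript
// Slice the full, oldest-first message array into fixed-size pages.
// Page 0 is the oldest chunk; the last page holds the newest messages.
function paginate<T>(items: T[], pageSize: number, page: number): T[] {
  const start = page * pageSize;
  return items.slice(start, start + pageSize);
}
```

A chat UI would typically render the last page first and load earlier pages as the user scrolls up.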

Error Handling

The endpoint returns appropriate HTTP status codes:
  • 200: Success - returns message array (may be empty)
  • 500: Internal server error - database query failed
try {
  const messages = await getMessages(chatId);
  if (messages.length === 0) {
    console.log('No messages yet');
  }
} catch (error) {
  console.error('Failed to load messages:', error);
  // Show error UI to user
}

Best Practices

  • Cache messages locally to reduce API calls
  • Implement optimistic UI updates when sending new messages
  • Use this endpoint when loading a chat, not after every message
  • Handle empty message arrays gracefully
  • Display timestamps in the user’s local timezone
  • Show message status indicators (sending, sent, delivered)
  • Create new messages via /api/chat
  • Create chat sessions via /api/create-chat
  • Messages are automatically saved during chat streaming
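A minimal sketch of the local-caching practice above (getMessagesCached, invalidateChat, and the fetcher parameter are hypothetical names; the fetcher stands in for whatever function actually calls /api/get-messages):

```typescript
type Message = {
  id: number;
  chatId: number;
  content: string;
  role: 'user' | 'system';
  createdAt: string;
};

// In-memory cache keyed by chatId.
const messageCache = new Map<number, Message[]>();

async function getMessagesCached(
  chatId: number,
  fetcher: (chatId: number) => Promise<Message[]>,
): Promise<Message[]> {
  const cached = messageCache.get(chatId);
  if (cached) return cached;
  const messages = await fetcher(chatId);
  messageCache.set(chatId, messages);
  return messages;
}

// Call after sending a message so the next load refetches fresh data.
function invalidateChat(chatId: number): void {
  messageCache.delete(chatId);
}
```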
