LightSession

Keep ChatGPT fast in long conversations

Free

The Problem

Long ChatGPT threads are brutal for the browser: the UI keeps every message in the DOM and the tab slowly turns into molasses.

Scrolling becomes choppy, typing lags, and DevTools crawl. After 100+ messages, even a powerful machine starts to struggle.

The Solution

LightSession fixes UI lag by trimming old DOM nodes on the client side while keeping the actual conversation intact on OpenAI's side.

Your full conversation is always preserved — just refresh to see it all again.
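
Under the hood the idea is straightforward: once the thread grows past a limit, the oldest rendered message elements are removed from the page. A minimal sketch of that idea, where the selector and the limit are illustrative assumptions rather than LightSession's exact code:

```ts
// Remove older rendered messages so the DOM stays small.
// KEEP_LAST and the [data-message-id] selector are illustrative assumptions.
const KEEP_LAST = 30;

function trimOldMessages(): void {
  const messages = document.querySelectorAll<HTMLElement>("[data-message-id]");
  const excess = messages.length - KEEP_LAST;
  for (let i = 0; i < excess; i++) {
    // Only the rendered node is removed; the conversation itself
    // still lives on OpenAI's servers and reappears on refresh.
    messages[i].remove();
  }
}
```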

Features

Automatic trimming — keeps only the last N messages visible (configurable from 1 to 100)
DOM batching — node removals stay within a 16 ms frame budget for smooth 60 fps scrolling (see the sketch after this list)
Smart timing — waits for AI responses to finish streaming before trimming
Status indicator — optional on-page pill showing trim statistics
Zero tracking — 100% local, no servers, no analytics, no telemetry
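
The batching and timing features boil down to two small pieces of logic: removals are spread across animation frames so each frame stays within roughly 16 ms, and trimming is skipped while a reply is still streaming. A hedged sketch of that approach (the streaming check and selector are assumptions, not LightSession's actual code):

```ts
// Sketch of frame-budgeted removal plus a "wait until streaming ends" guard.
// 16 ms corresponds to one frame at 60 fps; the selector below is an assumption.
const FRAME_BUDGET_MS = 16;

function isStreaming(): boolean {
  // Hypothetical check: assume a "stop generating" control is visible while a reply streams.
  return document.querySelector('[data-testid="stop-button"]') !== null;
}

function removeInBatches(nodes: HTMLElement[]): void {
  requestAnimationFrame(() => {
    const start = performance.now();
    // Remove nodes until the frame budget is spent, then yield to the next frame.
    while (nodes.length > 0 && performance.now() - start < FRAME_BUDGET_MS) {
      nodes.shift()!.remove();
    }
    if (nodes.length > 0) removeInBatches(nodes);
  });
}

function maybeTrim(excessNodes: HTMLElement[]): void {
  // Only trim once the current response has finished streaming.
  if (!isStreaming()) removeInBatches(excessNodes);
}
```

Spreading removals across frames keeps each frame's main-thread work bounded, which is what keeps scrolling smooth while a long thread is being trimmed.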

Who is this for?

  • People who keep very long ChatGPT threads (100+ messages)
  • Developers who use ChatGPT for debugging, code reviews, or long refactors
  • Anyone whose ChatGPT tab becomes sluggish after a while

FAQ

Does this reduce the model's context?

No. LightSession only trims the DOM (what the browser renders), not the data stored by OpenAI. The conversation on OpenAI's servers remains intact.

Is my data safe?

Yes. No external network requests, no analytics, no telemetry. Settings are stored locally in browser.storage.local.
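For illustration, storing settings this way needs nothing beyond the WebExtension storage API. A sketch assuming the promise-based browser namespace (as provided by Firefox or the webextension-polyfill); the setting names and defaults here are made up:

```ts
// Local-only settings via browser.storage.local; no network requests involved.
// This "browser" declaration is a minimal shim for a self-contained example;
// real code would use the types from webextension-polyfill.
declare const browser: {
  storage: {
    local: {
      get(defaults: object): Promise<object>;
      set(items: object): Promise<void>;
    };
  };
};

interface Settings {
  keepLast: number;       // how many messages to keep visible
  showIndicator: boolean; // whether to show the on-page status pill
}

const DEFAULTS: Settings = { keepLast: 30, showIndicator: true };

async function loadSettings(): Promise<Settings> {
  // Passing defaults returns stored values merged with those defaults.
  return (await browser.storage.local.get(DEFAULTS)) as Settings;
}

async function saveSettings(settings: Settings): Promise<void> {
  await browser.storage.local.set(settings);
}
```
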

Feedback

Found a bug or have feedback?

Create an issue on GitHub.

Open Source

LightSession is fully open source. Inspect the code, suggest improvements, or fork it for your own use.

View on GitHub →