What's new in PandaStudio. Every release is signed and notarized — download and run, no extra steps.
Latest
v1.24.8
Multi-clip projects finally export with full parity to preview, and background music actually plays. Three multi-clip export bugs caught at once: the camera went missing because the renderer initialised webcam dimensions from the first clip only (so a project whose only webcam lived on a later clip exported with no camera at all); layout drifted between clips because per-clip source dimensions weren't being updated on the renderer between clips; and lipsync was off in export because DeepFilterNet's cleaned WAV is quantised to 10 ms frames, so its duration didn't quite match the source video's. All three fixed: the renderer scans all clips for webcam dims at init, source/webcam dims update per clip in the export loop, and the cleaned WAV is length-normalised at clean time plus padded/trimmed in the mux filter. Background music now plays across clip boundaries (the sync code was reading per-clip source time from `<video>` instead of cumulative edited time), inserts at the playhead instead of the timeline start, and no longer crashes when the volume slider goes above 100% (HTMLAudioElement throws above 1.0, so it's clamped). Plus: drag-and-drop clip reorder with full region migration, right-click a motion graphic to promote it to an intro/outro clip, agent chat persists per project with image attachments, the roundness slider works in screenCover layouts, and a new 3D Title Card template via three.js.
Multi-clip export now matches preview — camera, layout, lipsync
Three latent bugs that only fired once a project had more than one clip. (1) Camera missing in export: the export renderer sized its webcam compositor from `clips[0]` only, so if clip 0 had no webcam but clip 2 did, the renderer happily exported the screen track and silently dropped the cam. Now we scan all clips for the first webcam at init. (2) Layout drift between clips: the renderer's source/webcam dimensions were set once at construction, so a clip with a different aspect ratio kept the previous clip's transform — visible as zoom regions firing in the wrong place. The renderer now exposes `updateSourceDimensions/updateWebcamSize` and the export loop calls them at every clip transition. (3) Lipsync drift: DeepFilterNet 3 outputs WAV quantised to 10 ms frames, so the cleaned audio for an 8.347 s source comes back as 8.350 s. Over a 5-minute video that's enough drift to be audible. The cleaned WAV is now padded/truncated to the exact source sample count at clean time, and the mux filter `apad,atrim,asplit` enforces it again at the boundary as a belt-and-braces guard.
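The pad-or-trim half of fix (3) is easy to picture in a few lines. This is an illustrative sketch, not the shipped code: the function name and the Float32Array representation of the PCM buffer are assumptions.

```typescript
// Hypothetical sketch: force a cleaned PCM buffer to the exact sample
// count of the source audio. A 10 ms-frame-quantised cleaner can return
// a buffer slightly longer or shorter than the source.
function normalizeLength(cleaned: Float32Array, sourceSampleCount: number): Float32Array {
  if (cleaned.length === sourceSampleCount) return cleaned;
  const out = new Float32Array(sourceSampleCount); // zero-filled, i.e. silence padding
  out.set(cleaned.subarray(0, Math.min(cleaned.length, sourceSampleCount)));
  return out;
}

// At 48 kHz, an 8.350 s cleaned buffer (400800 samples) is trimmed to
// the 8.347 s source's 400656 samples.
```

The mux-side `apad,atrim` pass then enforces the same invariant again at the container boundary.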
Background music plays across clips, at the playhead, without crashing
Three things wrong with music in v1.24.7. (a) Cross-clip playback: the music sync code computed window membership against `<video>.currentTime`, which in PandaStudio is per-clip source time and resets to 0 on every clip swap. Music with `startMs > clip0.duration` was always 'before window' from the audio element's perspective. Fixed by passing edited-timeline time via a dedicated getter that walks `clipToEditedTime`. (b) Insert position: the 'add music' button always passed `0` as the start, dropping the track at the timeline origin instead of the playhead. Now it threads the existing `currentTimeMs` prop through. (c) Volume crash: HTMLAudioElement's `.volume` property is hard-bounded to [0, 1] and throws `IndexSizeError` outside that range — the slider went 0–2 (200%) and silently tore down the audio element on any nudge past 100%. Now clamped at write time (legacy projects with > 1.0 saved values load gracefully) and the slider is capped at 1. True overdrive needs a Web Audio GainNode — out of scope for this fix.
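The clamp in (c) is tiny but worth spelling out, since HTMLMediaElement's [0, 1] bound is a hard throw rather than a silent clip. A hypothetical sketch of the write-time guard (names are illustrative):

```typescript
// HTMLMediaElement.volume throws IndexSizeError outside [0, 1], so clamp
// at write time. Legacy projects may carry saved values above 1.0 from
// the old 0-200% slider.
function clampVolume(v: number): number {
  if (!Number.isFinite(v)) return 1; // defensive default for corrupt saves
  return Math.min(1, Math.max(0, v));
}

// audioEl.volume = clampVolume(savedTrack.volume); // 1.6 becomes 1, never throws
```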
Cleaned-audio preview no longer crackles
Cleaned audio in preview hissed and clicked continuously even though the same WAV played fine in QuickTime. The cause was a `canplay → seek → canplay` feedback loop: each `audio.currentTime = X` triggers a buffer rebuild which on completion fires another `canplay`, and the canplay handler force-synced with a tight drift threshold, which seeked again, which fired canplay again. Diagnostic logging caught it: 13,912 audio seeks in a 5-second window, all attributed to the canplay handler. Now canplay handles only the FIRST transition from 'not-ready' to 'ready' and uses a loose 0.3 s tolerance for the alignment, the same tolerance the timeupdate path uses. No more re-entry, no more clicks.
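The guard can be modelled as a small piece of pure state, which is roughly how the fix behaves. This sketch is a reconstruction, not the shipped handler; the class name and return-a-seek-target shape are assumptions.

```typescript
const SYNC_TOLERANCE_S = 0.3; // same loose tolerance the timeupdate path uses

class CleanedAudioSync {
  private hasHandledReady = false;

  // Called from the canplay handler. Returns a seek target in seconds,
  // or null when no seek should happen. Only the FIRST not-ready ->
  // ready transition may seek, so a seek-induced canplay can't re-enter.
  onCanPlay(audioTimeS: number, videoTimeS: number): number | null {
    if (this.hasHandledReady) return null;
    this.hasHandledReady = true;
    return Math.abs(audioTimeS - videoTimeS) > SYNC_TOLERANCE_S ? videoTimeS : null;
  }
}
```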
Drag-and-drop clip reorder with full region migration
Clips on the multi-clip strip can now be dragged to a new position with an insert-line indicator showing where they'll land. The hard part wasn't the UI — it was making sure that all eight kinds of region (zoom, trim, speed, clip-transform, audio overlays, motion graphics, lower-thirds, annotations) follow their owning clip when its index changes. Each region type stores its time references in one of two coordinate systems (per-clip source time or merged-source time), and the reorder primitive in `projectEdit.ts` walks every region, recomputes its position relative to the new prefix sums, and writes back. Nine new tests in `projectPersistence.test.ts` lock down the migration math (170 unit tests passing total). Right-clicking a motion graphic in the timeline now offers 'Move to start (intro clip)' and 'Move to end (outro clip)' which use the same primitive to promote the overlay into a standalone clip at either end.
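For regions stored in merged-source time, the migration math boils down to prefix sums over clip durations: find the owning clip under the old order, keep the region's offset inside it, and re-project onto the new order. A hypothetical sketch, not the actual `projectEdit.ts` primitive:

```typescript
function migrateRegionStart(
  startMs: number,
  oldOrder: { id: string; durationMs: number }[],
  newOrder: { id: string; durationMs: number }[],
): number {
  // 1) Find the owning clip and the offset inside it (old prefix sums).
  let acc = 0;
  let ownerIdx = oldOrder.length - 1;
  for (let i = 0; i < oldOrder.length; i++) {
    if (startMs < acc + oldOrder[i].durationMs) { ownerIdx = i; break; }
    acc += oldOrder[i].durationMs;
  }
  const offset = startMs - acc;

  // 2) Re-project onto the new order's prefix sums.
  const ownerId = oldOrder[ownerIdx].id;
  let newAcc = 0;
  for (const clip of newOrder) {
    if (clip.id === ownerId) return newAcc + offset;
    newAcc += clip.durationMs;
  }
  return startMs; // owner missing from new order: leave untouched
}
```

Regions stored in per-clip source time need no remapping at all; only their clip reference moves.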
Agent chat persists per project, with image attachments
Per-project chat persistence (the AgentChatPanel hydrates from a `userData/agent-chats/<projectId>.json` sidecar on project open and replays the opencode transcript) gained two follow-ups in this release. First, image attachments: paste an image or use the new attach button to send a screenshot to the agent, encoded as an opencode `FilePartInput`. Second, a re-mount guard via `lastHydratedProjectIdRef` so React StrictMode double-renders and incidental re-mounts don't clobber the in-memory thread mid-conversation. The '+ New session' button drops the sidecar and starts fresh; the old session stays in opencode's database so accidental clicks aren't catastrophic.
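The re-mount guard is a one-liner in spirit. A hypothetical, framework-free sketch of the `lastHydratedProjectIdRef` idea:

```typescript
function makeHydrationGuard() {
  let lastHydratedProjectId: string | null = null;
  // Returns true only the first time a given project asks to hydrate;
  // repeat calls for the same project (StrictMode double-render,
  // incidental re-mount) are no-ops that leave the in-memory thread alone.
  return (projectId: string): boolean => {
    if (lastHydratedProjectId === projectId) return false;
    lastHydratedProjectId = projectId;
    return true;
  };
}
```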
Roundness slider honoured in screenCover layouts + new 3D Title Card
Two smaller polish items. The borderRadius slider was hard-coded to 0 whenever the composite layout used screenCover (webcam-only or layout-preset-driven side-by-side modes), so dragging the slider in those layouts did nothing — a user-visible bug because most webcam-recording sessions land in screenCover. Now the slider's value is honoured unconditionally; rounded corners on cover content show the wallpaper through the corners, which is the intended look. Separately, the bundled '01 - Title Card Vox' motion-graphic template was rewritten in three.js with auto-fit text scaling, a perspective grid floor, and a parallax sky — the first proper 3D template in the library, replacing the flat HTML/CSS version. The motion-philosophy doc was also realigned with HeyGen's HyperFrames prompting guide so AI agents generating motion graphics through `motion.render-html` use deterministic finite repeat counts and a seeded mulberry32 PRNG for reproducible output across renders.
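mulberry32 itself is a standard, tiny 32-bit PRNG; seeding it is the whole trick, because the same seed yields the same sequence of "random" values on every render. The canonical implementation, for reference:

```typescript
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}
```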
New `project.move-clip` MCP verb
Drag-and-drop clip reorder is now reachable from AI agents too. New `project.move-clip` verb (4-layer parity: handler in `electron/automation/handlers/project.ts`, primitive in `projectEdit.ts`, static MCP schema in `packages/mcp/bin/server.mjs`, documentation in SKILL.md) takes a `clipId` and `toIndex` and runs the same region-migration logic as the UI. CLI/MCP bumped to 1.29.1, SKILL.md to 2.36.1.
Previous releases
v1.24.7
The AI agent gets its tools back, and your conversation with it now sticks to the project. v1.24.6 shipped the agent with no MCP tool access — it could chat but couldn't actually read your project, transcripts, or regions, because the bundled MCP server was being spawned via Electron's binary without `ELECTRON_RUN_AS_NODE=1` and silently launched a second copy of the desktop app instead of running as Node. Symptom in the field: the agent fell back to shelling out to `which pandastudio` and `npx @writepanda/cli` — neither of which exists in production — and then asked whether the PandaStudio MCP server was even running. Fixed by setting the right env on the spawn. Alongside that, agent chats now persist per project: open a project tomorrow and the conversation is right where you left it, same thread, same model. The opencode server already kept full transcripts in its own database — the only missing piece was remembering which session belonged to which project. A new "+ New session" button in the chat header drops the persisted thread when you want a clean start.
• AI agent has tool access in production again
• Per-project agent chat persistence + "+ New session" button
v1.24.6
Two production bugs caught by users on v1.24.5 and fixed the same day. The bundled AI agent (opencode) silently failed to launch in installed Mac builds — the wrapper script under node_modules has shebang `#!/usr/bin/env node`, and Electron apps started from Finder inherit a minimal PATH with no `node` on it, so `exec` died with code 127 before the script even ran. The agent tab showed `opencode server exited before binding (code=127, signal=null). stdout:` with empty stdout because the interpreter never started. PandaStudio now spawns the cached self-contained native opencode binary directly and bypasses the wrapper. Auto-transcribe also got stuck in a loop on clips that legitimately yielded zero recognised words (true silence, music-only clips, unrecognised speech) — the 'needs transcription' check matched any clip with a missing or empty transcript, so the same clip was picked again every time the empty result was saved. Empty transcripts are now treated as terminal; the user can manually re-run from the transcript panel if they suspect the result was wrong.
v1.24.5
Three real bugs caught and fixed in a tight cycle. Recording was producing empty files on Electron 39 / Chrome 132 because MediaRecorder kept selecting AV1 codec — `isTypeSupported('video/webm;codecs=av1')` returned true but the encoder silently emitted no frames. Every Stop button click resolved to a 0-byte Blob, the finalizer's empty-blob guard returned early, and nothing saved. AV1 dropped from the preferred codec list. The aspect-ratio picker on the Home screen wasn't actually constraining the camera — it only flowed to the preview window, not the recorder, so iPhone Continuity Camera and capture-card setups silently delivered portrait video even when 16:9 was selected. Now the picked aspect builds proper getUserMedia constraints. And the floating preview no longer goes black during recording when you use a built-in webcam — only virtual cams (EOS Webcam Utility, OBS Virtual Cam, Camo, iPhone Continuity, NVIDIA Broadcast, Snap, mmhmm, Loopback) trigger the pause-preview workaround they need, and they show a clear 'Preview is not available for virtual webcams during recording' placeholder instead of just black.
• Recording works again — AV1 codec dropped
• Camera aspect ratio actually applies now
• Floating preview stays live during recording for built-in webcams
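The codec fix amounts to a preference walk with AV1 blocked. A hypothetical sketch; the support predicate is injected in place of `MediaRecorder.isTypeSupported` so the policy is testable outside a browser, and the exact preference list is an assumption:

```typescript
const PREFERRED_MIME_TYPES = [
  "video/webm;codecs=av1", // claims support on this stack yet emits zero frames
  "video/webm;codecs=vp9",
  "video/webm;codecs=vp8",
  "video/webm",
];
const BLOCKED_CODECS = ["av1"];

// Return the first mimeType the recorder claims to support, skipping
// codecs we know to be broken despite isTypeSupported saying otherwise.
function pickMimeType(isTypeSupported: (mime: string) => boolean): string | undefined {
  return PREFERRED_MIME_TYPES
    .filter((m) => !BLOCKED_CODECS.some((c) => m.includes(c)))
    .find(isTypeSupported);
}

// In the app: pickMimeType((m) => MediaRecorder.isTypeSupported(m))
```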
v1.24.4
The architectural fix that v1.24.3 set up but couldn't fully deliver. v1.24.3 made virtual-cam recordings stop capturing the placeholder frame, but it had to pause the floating preview during recording — DSLR / iPhone Continuity Camera users who specifically want to see themselves while recording were left with a black preview window. Wrong tradeoff. v1.24.4 inverts the architecture entirely: the camera preview window is now the SOLE getUserMedia consumer for the physical device, and live VideoFrames flow to the recorder via an Electron MessageChannel using MediaStreamTrackProcessor + MediaStreamTrackGenerator. One physical consumer, two windows seeing live frames. Result: preview stays live during recording for ALL cameras (built-in + every virtual cam — EOS Webcam Utility, OBS Virtual Cam, Camo, iPhone Continuity Camera, NVIDIA Broadcast, Snap Camera, mmhmm), recordings capture the live feed correctly, no 250 ms pause on Record click.
• Single getUserMedia consumer + cross-window VideoFrame relay
• Removed: pause/resume cycle, 250 ms Record-click latency, Recording-overlay UI
v1.24.3
Patch for v1.24.2. Two camera bugs that surfaced when a creator connected a Canon DSLR through EOS Webcam Utility — and almost certainly affected anyone using OBS Virtual Camera, Camo, or iPhone-as-webcam too. First: switching cameras in the recorder dropdown silently kept showing the old camera; the workaround was to disable webcam, then re-enable it. Second, the worse one: recordings made with virtual cameras came out with the cam's static placeholder image instead of the live feed, even though the preview window had been showing live frames seconds before clicking Record. Both fixed at the IPC layer.
• Live camera switching — no more disable + re-enable
• Virtual-cam recordings stop capturing the placeholder frame
v1.24.2
Tiny but satisfying patch: stale agent processes that used to accumulate across PandaStudio launches now get swept automatically. The bundled AI agent harness (opencode) was leaking child processes whenever the parent app shut down dirty — a force-quit, a kernel panic, killing the dev server with Ctrl+C, anything that didn't fire the normal before-quit hook. The orphaned process got reparented to launchd and sat there forever, holding a port and ~100 MB of RAM. Over a few weeks of force-quits you'd accumulate 10+ stragglers. Now every spawned opencode PID is recorded to a per-user pidfile; on the next app launch we read that file and SIGTERM → SIGKILL anything still alive that we recorded as ours, then rewrite the file empty. stopOpencodeServer also escalates SIGTERM → SIGKILL after a 2-second grace so a wedged child can't survive a normal shutdown either. Net effect: even after a dirty exit, the next launch leaves you with exactly one opencode process. No accumulation across launches — ever.
• Orphaned agent-server cleanup, the right way
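The sweep policy can be sketched with liveness probing and killing injected, so it runs without real processes. This is an illustrative reconstruction: in the app the probes would be `process.kill(pid, 0)` and `process.kill(pid, signal)`, and the shipped code also waits a grace period between the two signals, which this sketch omits.

```typescript
function sweepOrphans(
  recordedPids: number[],
  isAlive: (pid: number) => boolean,
  kill: (pid: number, signal: "SIGTERM" | "SIGKILL") => void,
): number[] {
  const swept: number[] = [];
  for (const pid of recordedPids) {
    if (!isAlive(pid)) continue;            // already gone, nothing to do
    kill(pid, "SIGTERM");                   // polite first
    if (isAlive(pid)) kill(pid, "SIGKILL"); // escalate if it ignored us
    swept.push(pid);
  }
  return swept; // caller rewrites the pidfile empty afterwards
}
```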
v1.24.1
Patch for v1.24.0. Three bugs that surfaced once people started actually publishing the videos they exported: AI-generated thumbnails were saved as WebP, but YouTube's thumbnails.set endpoint rejects image/webp ("invalid image") — so every fresh thumbnail was failing on upload. Fixed: gpt-image-2 now outputs PNG, the FFmpeg crop pipeline writes PNG, and the YouTube uploader transparently transcodes any leftover .webp file to PNG via FFmpeg before sending. Also: clicking the description field in the export-library content tab was throwing a ReferenceError and breaking the panel layout (the keyboard-hint span referenced process.platform, which doesn't exist in the renderer) — fixed with a navigator-based platform check. And the thumbnail buttons now show an inline spinner while generating instead of just going disabled, plus a new "Download" button copies the canonical thumbnail to a user-chosen location.
• Thumbnails generated as PNG (not WebP) — Publish to YouTube works
• Description field stops crashing on click
• Thumbnail buttons show a spinner while working
• Download button — save the thumbnail to your computer
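The renderer-safe platform check is small. A hypothetical sketch using only `navigator` fields (the function name and the exact regex are assumptions):

```typescript
// process.platform doesn't exist in a sandboxed renderer, so derive the
// keyboard-hint modifier from navigator instead.
function isMacLike(nav: { platform?: string; userAgent?: string }): boolean {
  const probe = nav.platform || nav.userAgent || "";
  return /Mac|iPhone|iPad|iPod/i.test(probe);
}

// const hint = isMacLike(navigator) ? "⌘" : "Ctrl";
```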
v1.24.0
The convergence release. PandaStudio used to ship two parallel render pipelines: a Tier-3 PixiJS path that drove the live preview and the in-app Export Video button, and a separate Rust + Skia native render-helper that served agent-driven exports. Same project, two pipelines, two sets of bugs. The Skia path was slower by an order of magnitude on large projects and had a recurring audio-drift bug where SFX trailed the visuals they were attached to. v1.24.0 deletes the Skia path entirely — the Rust crate, the native binary, the IPC handler, the electron-builder packaging — and routes agent exports through the same Tier-3 pipeline the UI button uses, via a hidden BrowserWindow when no editor is open. One render path. Identical output regardless of who initiated the export. Plus a critical fix to the source-vs-edited time accounting in the Tier-3 exporter that was causing video overlays + zooms to render at the wrong encoded frame when trim regions were present, making SFX appear to lag the visuals in export while preview stayed in sync.
• One export pipeline — Tier-3 PixiJS for everyone
• Agent-export dispatcher — reuses your editor or spawns a hidden one
• Source-vs-edited time fix — overlays + SFX now stay in sync in export
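The time-accounting fix in the last bullet comes down to shifting an edited-timeline timestamp past every trim that precedes it before it can address a source frame. A hypothetical sketch; the region shape and function name are illustrative, not the exporter's actual types:

```typescript
// Trims are in source time, sorted and non-overlapping. An edited-timeline
// timestamp addresses only the kept material, so every cut that starts at
// or before the running source position shifts it by the cut's length.
interface TrimMs { startMs: number; endMs: number }

function editedToSourceMs(editedMs: number, trims: TrimMs[]): number {
  let sourceMs = editedMs;
  for (const t of trims) {
    if (t.startMs <= sourceMs) sourceMs += t.endMs - t.startMs;
    else break;
  }
  return sourceMs;
}
```

Sampling overlays and zooms at the source time rather than the raw edited time is what keeps SFX aligned with the visuals they're attached to.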
v1.23.3
Patch for v1.23.2. The agent's chat panel was disappearing mid-conversation: every time the agent finished an edit and called preview.show (the canonical "show me what you did" verb), the editor BrowserWindow was destroyed and recreated, which wiped the renderer process and took the chat transcript with it. v1.23.2 patched the same hazard in project.open but missed the three sibling verbs (preview.show, window.preview, window.editor) that have identical behaviour. v1.23.3 extracts the soft-open logic into a shared helper and applies it everywhere — when the editor is already showing the target project, the verb refreshes from disk + focuses the existing window instead of recreating it. AgentChatPanel keeps its chatId, transcript, and OAuth state across edits.
• Agent chat survives across the agent's own "preview" calls
v1.23.2
The frosted-glass release. Motion graphics now ship with proper backdrop-blur — when you add a glass card, the camera behind the graphic's exact alpha shape gets blurred, like every modern editor (Premiere, CapCut, DaVinci) does. Implemented end-to-end across all three render paths (preview, Skia exporter, Tier-3 PixiJS) with alpha-driven masking, so the blur follows the graphic's content shape — no rectangular halo. Plus: a new "Remix with AI" button on every motion-graphic template hands the template's HTML to the embedded agent as a visual reference and lets you describe what to change in plain English. The agent re-authors a custom variant that matches the template's look while restaging timing, content, and layout for your specific narrative. Templates stop being slot-fill containers; they start being inspirational examples. Several agent-side polish fixes too — the chat panel finally streams narration around tool calls (was dropping post-tool text), the live template preview actually animates instead of showing literal {{title}} placeholders, and the clip-transform inspector lives inside the sidebar instead of pushing it below the viewport.
• Frosted-glass backdrop blur — first-class effect on any overlay
• Glass mode for motion-graphic templates
• Remix with AI — templates as inspirational examples, not rigid slot-fills
• Live template preview actually animates
v1.23.1
Patch for v1.23.0. The bundled opencode agent binary was missing from the production Mac/Windows packages — the AI tab crashed with ENOENT on the first prompt because the package that contains the binary was in devDependencies and electron-builder strips those from the shipped app. Moved to dependencies and rebuilt; the 103 MB native binary is now baked into both architectures and the AI tab works on first launch. If you installed v1.23.0, the in-app updater will pull this patch automatically.
• Agent binary actually ships now
v1.23.0
AI agent chat lands in PandaStudio — opencode-backed multi-provider chat (ChatGPT subscription via OAuth, OpenAI API, Anthropic, Moonshot Kimi, Google, and more) with full editor context, friendly tool labels, and persistent state across tab switches. Plus three smaller fixes that have been bugging us: clip-transform regions now keep the camera on top of media overlays in every render path, music overlays default to a polite 5% volume, and export quality is finally a proper option in the Export dialog instead of an always-visible strip in the sidebar footer. Intel-Mac users also get Clean Audio back — the deep-filter binary was silently missing from the Intel dmg since v1.8.2 and is now baked in.
• AI agent chat panel
• Editor context, never in the transcript
• First-run notice + persistent chat
• Clip-transform layering: main camera stays on top
v1.22.0
Three releases' worth of work consolidated. The headline: clip-transform regions — a new way to compose talking-head and explainer videos where the camera shrinks during a motion-graphic beat (camera in the bottom half of a 9:16 Short while the graphic plays up top, or camera as a right-portrait card on a 16:9 explainer in the Ali-Abdaal style) and slides back to full-frame when the beat ends. Shipped end-to-end across preview, export, automation, and the editor UI. Plus: B-roll image generation via your Replicate API key, agency-grade workspace context for cross-workspace safety, default zoom softened to 1.5× with a swoosh, FX timeline regions can now be drag-resized, and the export pipeline matches the preview frame-for-frame on every code path.
• Clip-transform regions — the explainer-beat layout
• Manual UI for clip-transform — timeline track + inspector
• Export bug fix: clip-transform applies on every code path
• FX timeline regions: drag-to-resize is back
v1.19.1
Quality release. Three timeline-editor bugs fixed (timeline zoom no longer resets when you split a clip, zooms now apply on second-and-beyond clips in multi-clip projects, individual sub-tile selection + drag-to-resize are back in multi-clip mode). YouTube-publish polish: external links open in your default browser so your real YouTube session is used; thumbnail upload failures surface the actual reason instead of silently succeeding. And the big one for agent users: motion-graphics quality is now governed by an aesthetic contract (11 Laws + visual vocabulary + 3-mode delivery playbook) so agent-authored motion graphics aim for HyperFrames-demo quality instead of template-filled defaults.
• Timeline zoom survives splits
• Zooms apply on all clips in multi-clip projects
• Individual sub-tile selection + drag-to-resize in multi-clip mode
• External links open in your real browser
v1.19.0
Two big additions: Workspaces (multi-client isolation for agencies and consultants — every project, export, and integration scoped to its workspace) and YouTube publishing (OAuth-connect your Google account, upload directly from the Export Library with custom thumbnails, metadata, and privacy settings). The YouTube refresh token stays encrypted on your machine via the OS keychain — it never leaves the device.
• Workspaces — keep clients separate
• Publish to YouTube from PandaStudio
• Local-only YouTube auth — no backend, no data leaves
v1.17.1
Agent-authored motion graphics are now frame-perfect. The in-house Puppeteer+FFmpeg render path has been replaced with HyperFrames (HeyGen's open-source Apache-2.0 engine) for deterministic capture via Chrome's BeginFrame API — every frame lands at its exact composition time, no more "text flew past in the first 200 ms" bugs. The installer is also ~40% smaller than it would otherwise have been: both render paths now share a single bundled headless Chromium instead of shipping two.
• Frame-perfect motion.render-html via HyperFrames
• Concrete authoring contract + pacing rules for agents
• Installer ~40% smaller — one Chromium, two render paths
• Common-mistakes list + non-GSAP escape hatch
v1.16.0
Audio overlays are now first-class timeline regions — drag, trim, see a real waveform. Fixed a long-standing preview bug where music didn't play (export was always correct). Music panel now previews each bundled track before adding. Agents get a full SaaS-promo playbook, image-tilt rule, and a no-static-scene motion rule.
• Music + VO on the timeline, not in a sidebar
• Music finally plays in preview
• Music-preview button in the Audio tab
• Richer music library with agent-routing tags
v1.15.1
Anchored zooms (survive trims + speed edits), preview plays cleaned audio + music immediately, agent exports match manual exports, and the Settings modal now shows your version + an Update button on every platform.
• Zooms anchor to source-time landmarks
• New project.add-zoom argument: anchorSourceMs
• Anchor types and behavior
• Cleaned audio plays in preview without a restart
v1.14.1
Three agent-workflow fixes — live editor refresh when an agent writes, reliable Copy Project ID, and motion graphics rendering in preview.
• Editor auto-refreshes when an agent edits
• Copy Project ID actually copies
• Agent-added motion graphics render in preview
v1.14.0
Sound effects for zooms and motion graphics, fast project lookup via SQLite index, and a clutch of editor + export fixes.
• Zoom + motion-graphic sound effects
• Agent API for SFX
• project.current — target what's open
• Fast project lookup (SQLite index)
v1.13.0
Transparent overlay motion graphics — agents and the export pipeline now fully support WebM/VP9 alpha overlays composited over video.
• motion.screenshot — validate layout in under a second
• motion.concat — assemble multi-scene promos into one MP4
• project.add-audio / project.remove-audio — background music for projects
• motion.render-html: bake audio into rendered clips
v1.9.5
Arbitrary motion-graphic HTML rendering — no template wall for Claude Code.
• motion.render-html — author the animation, we'll render it
• Why the escape hatch exists
• Locked-down render window
• MCP tool + Skill docs updated
v1.9.4
preview.show now opens the editor focused on the project — same one-call UX from the agent's POV, but using the editor's battle-tested preview pane instead of a parallel chromeless window.
• Simpler 'show me' surface
• Why the change
• @writepanda/cli@1.9.4 + @writepanda/mcp@1.9.4 on npm
v1.9.3
MCP server + npm-distributed CLI. Cursor, Continue, Cline, Claude Desktop — every MCP-compliant agent can now drive PandaStudio.
• @writepanda/mcp — Model Context Protocol server on npm
• @writepanda/cli — CLI on npm too
• Three install paths, one underlying surface
• pandastudio_call escape hatch
v1.9.2
PandaStudio is now agent-editable. Full CLI surface, transcript-based editing, headless export, and a floating preview overlay AI agents can pop up at will.
• Local CLI — drive PandaStudio from the command line
• Claude Skill bundle — one-click install for AI agents
• Transcript-based editing from the CLI
• Floating preview overlay — agents show their work
v1.8.7
Lower thirds, FX overlays, bundled sound library, and end-to-end multi-clip polish.