@OpenAIDevs


Today at DevDay SF, we’re launching a bunch of new capabilities to the OpenAI platform:

🗣️ Introducing the Realtime API—build speech-to-speech experiences into your applications. Like ChatGPT’s Advanced Voice, but for your own app. Rolling out in beta for developers on paid tiers.
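A minimal sketch of what a Realtime API session might look like from Python over a raw WebSocket, assuming the beta endpoint, the `OpenAI-Beta: realtime=v1` header, and the `response.create`/`response.done` event names from the launch docs; a real app would stream audio as base64 chunks rather than the text-only turn shown here.

```python
import asyncio
import json
import os

import websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}


async def main() -> None:
    # `extra_headers` is the argument name in websockets < 14
    # (newer releases call it `additional_headers`).
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Text-only turn for brevity; audio flows in and out as
        # base64-encoded chunks via the audio buffer events.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"modalities": ["text"], "instructions": "Say hello."},
        }))
        async for raw in ws:
            event = json.loads(raw)
            print(event["type"])
            if event["type"] == "response.done":
                break


asyncio.run(main())
```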

🗃️ Prompt Caching is now available. Our models can reuse recently seen input tokens, letting you add even more cached context into our models at a 50% discount and with no effect on latency.
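Caching needs no API changes, so the main thing to sketch is prompt layout: keep the large static prefix identical across calls and vary only the tail. The `usage.prompt_tokens_details.cached_tokens` field used below to confirm cache hits is our reading of the launch docs, not a guaranteed name.

```python
from openai import OpenAI

client = OpenAI()

# The long, unchanging part of the prompt goes first on every request so the
# prefix can be reused from the cache; only the user question varies.
LONG_STATIC_SYSTEM_PROMPT = "You are a support assistant for Acme. <several KB of policy text>"


def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": LONG_STATIC_SYSTEM_PROMPT},  # stable prefix
            {"role": "user", "content": question},                     # varying suffix
        ],
    )
    # Assumed field name: how many prompt tokens were served from the cache
    # (and therefore billed at the discounted rate).
    print("cached prompt tokens:", resp.usage.prompt_tokens_details.cached_tokens)
    return resp.choices[0].message.content
```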

🗜️ We're introducing Model Distillation—which includes Evals and Stored Completions—a workflow to fine-tune smaller, cost-efficient models using outputs from large models.
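Roughly, the distillation loop is: tag and store the large model's outputs, evaluate them, then fine-tune a smaller model on the good ones. The sketch below assumes the `store`/`metadata` parameters announced today and uses a placeholder training-file ID standing in for the JSONL you would export from Stored Completions.

```python
from openai import OpenAI

client = OpenAI()

# 1) Generate with the large model and keep the result as a Stored Completion.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this support ticket: ..."}],
    store=True,
    metadata={"task": "ticket-summaries"},  # lets you filter stored completions later
)

# 2) After evaluating and exporting the stored completions to JSONL,
#    fine-tune a smaller model on them. The file ID below is a placeholder.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini",
    training_file="file-abc123",
)
print(job.id, job.status)
```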

🖼️ We’re adding support for vision fine-tuning. You can now fine-tune GPT-4o with images, in addition to text. Free training till October 31, up to 1M tokens a day.
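For reference, a single vision fine-tuning training example might look like the JSONL line written below; the message shape mirrors the chat API's `image_url` content parts, and the URL, prompt, and label are placeholders.

```python
import json

# One training example: same messages structure as the chat API,
# with an image_url content part in the user turn.
example = {
    "messages": [
        {"role": "system", "content": "You identify traffic signs."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What sign is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/sign.jpg"}},
            ],
        },
        {"role": "assistant", "content": "A stop sign."},
    ]
}

# Fine-tuning expects one JSON object per line (JSONL).
with open("vision_train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```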

✨ New Playground features—quickly turn your ideas into prototypes. Describe what you’re using a model for, and the Playground will automatically generate prompts and valid schemas for functions and structured outputs.
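As an illustration of what the Playground can generate for you, here is a hand-written strict JSON Schema of the kind it produces for Structured Outputs, passed via `response_format`; the schema contents and model choice are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# A strict schema like the ones the Playground generates for Structured Outputs.
bug_report_schema = {
    "name": "bug_report",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "severity": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title", "severity"],
        "additionalProperties": False,
    },
}

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "The app crashes whenever I upload a photo."}],
    response_format={"type": "json_schema", "json_schema": bug_report_schema},
)
print(resp.choices[0].message.content)  # JSON conforming to the schema
```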

🪴 OpenAI o1 API—we’re expanding access to developers on usage tier 3 and increasing rate limits (to the same limits as GPT-4o) so your apps can be production-ready.

Tier 5
o1-preview: 10,000 requests per minute
o1-mini: 30,000 requests per minute

Tier 4
o1-preview and

